text
stringlengths 454
608k
| url
stringlengths 17
896
| dump
stringclasses 91
values | source
stringclasses 1
value | word_count
int64 101
114k
| flesch_reading_ease
float64 50
104
|
|---|---|---|---|---|---|
java struts error - Struts
java struts error
my jsp page is
post the problem... loginaction page is
package com.ssss.struts;
import org.apache.struts.action....;
lf.getUsername();
lf.getPassword();
return am.findForward("success");
}
}
my:-
<
Unable to understand Struts - Struts
but I did'nt get successfull msg.
it doesnat forwads on the success page... {
/* forward name="success" path="" */
private static final String SUCCESS = "success";
/**
* This is the action called
Page Refresh - Struts
Page Refresh Hi How Can we control the page does not go to refersh after its forward again from action class.in my form i have one dropdownlist... in same page and fwd the action to same page
java - Struts
java This is my login jsp page::
function...:
Submit
struts...("password")) {
// we are in
return mapping.findForward("success");
} else
java - Struts
,struts-config.xml,web.xml,login form ,success and failure page also...
code...java hi..,
i wrote login page with hardcodeted username and password ,but when i submit the page ,i give blank page...
one hint: error
java - Struts
/loginpage.jsp
login page ::
function validate(objForm...
Success
Hi ,
You Are Successfully Loged in as
failure.jsp...!
struts-config.xml
Basic Steps to Outsourcing Success, Steps to Success in Outsourcing
Basic Steps to Outsourcing Success
Introduction
There are a few fundamental steps to ensure the best results for your outsourcing venture. These strategies help to reap maximum benefits as also avoid pitfalls and complications.
Remember
Struts - Struts
Struts Hello
I have 2 java pages and 2 jsp pages in struts... success
registrationForm.java for form beans
Success.jsp... with source code to solve the problem.
For read more information on Struts visit
Struts - Struts
Struts Hello
I like to make a registration form in struts inwhich... course then page is redirected to that course's subjects. Also all subject should be displayed on single dynamic page according to selected by student.
pls send
program code for login page in struts by using eclipse
program code for login page in struts by using eclipse I want program code for login page in struts by using
used in a struts aplication.
these are the conditions
1. when u entered into this page the NEXT BUTTON must disabled
2. if u enter any text"
"
Portal Development ' A Step Towards The Success Of Your Business
Portal Development – A Step Towards The Success Of Your Business
Web... with a search engine results page but it is much more detailed, providing you... internet surfer and allows anyone access to the information linked to the page
Struts - Struts
Exception {
return mapping.findForward("success - Struts
button it has to go to another page.. for that one i taken two form tags..
i
Struts Forward Action Example
;
<p><html:linkCall the Success page</html...;
<html:linkStruts Forward Action</html... Struts Forward Action Example
Struts Tag:
Struts Tag:
bean:struts Tag -is used to create a new bean containing one of the
standard Struts framework configuration objects. This tag retrieve the value of the specified Struts
Struts tag - Struts
Struts tag I am new to struts,
I have created a demo struts application in netbean,
Can any body please tell me what are the steps to add new tags to any jsp page
Struts Videos
Struts Videos
Watch the latest videos on YouTube.com
Redirection in struts - Struts
sendredirect can we forward a page in struts Hi
There are more ways to forward one page to another page in struts.
For Details you can click here:
delete and edit options in struts
delete and edit options in struts Hi, I am doing an web application using struts, jsp, tomcat server, oracle as a database in netbeans IDE 7.1.2, I... : admin
--%>
<%@page contentType="text/html" pageEncoding="UTF-8"%>
code problem - Struts
=mapping.findForward(SHEET_EXCEPTION_PAGE);
}
System.out.println("....ok..2.");
ActionForward af=mapping.findForward(SHEET_NEW_SUCCESS...=mapping.findForward(SHEET_EXCEPTION_PAGE);
}
ActionForward af=mapping.findForward
Struts 2.2.1
Struts 2.2.1 released
The latest version of Struts framework is released. The new version
is Struts 2.2.1 and it includes many new features
struts internationalisation - Struts
struts internationalisation hi friends
i am doing struts iinternationalistaion in the site... the welcome page it always defaultly take the french
plz give solution for Layout Examples - Struts
Struts Layout Examples Hi,
Iam trying to create tabbed pages using the struts layout tag.
I see the tab names on the page but they cannot...://
Thanks.
Amarde
Doubts on Struts 1.2 - Struts
Doubts on Struts 1.2 Hi,
I am working in Struts 1.2. My requirement is to display data in a multiple selected box from database in a JSP page.
Can... visit for more information.
Thanks
Struts Tag Lib - Struts
Struts Tag Lib Hi
i am a beginner to struts. i dont have...
Defines a tag library and prefix for the custom tags used in the JSP page.
JSP Syntax
Examples in Struts :
Description
The taglib
validation problem in struts - Struts
project using struts framework. so i made a user login form for user authentication...;
ResultSet rs = null;
Statement statement = null;
boolean success...))
{
success = true;
return mapping.findForward Roseindia
Struts 2 Framework is used to develop enterprise web application....
Struts 2 Framework encourages Model-View-Controller based architecture... parts. Struts 2 framework builds, develops and maintains the whole application
pls review my code - Struts
pls review my code Hello friends,
This is the code in struts. when i click on the submit button.
It is showing the blank page. Pls respond soon...))
return mapping.findForward("success");
else
Struts 2.0 - Struts
Struts 2.0 Hi ALL,
I am getting following error when I am trying to use tag.
tag 'select', field 'list': The requested list key 'day' could...");
day.add("Saturday");
return SUCCESS
Struts 2 Application
Struts 2 Application
Developing user registration application based on
Struts 2 Framework
This Struts 2 Application is a simple user registration
application that will help you to learn how to develop real world applications using
Struts.1.8 Login Form
page</a>
</body>
</html>
Success page...
Struts 2.1.8 Login Form
... in
Struts 2.8.1. The Struts 2 framework provides tags for creating the UI forms
Struts framework
Struts framework How to populate the value in textbox of one JSP page from listbox of another JSP page using Struts-html tag and java script
How Struts Works
page, Velocity templates, XSLT pages etc. In struts there are set of JSP... for many
page requests. Struts provides the ActionForm and the Action classes which...
How Struts Works
Skinning example in Struts 2.2.1
Skinning example in Struts 2.2.1
An example of Skinning is given below
login.jsp
<%@ page contentType="text/html; charset=UTF-8" %>
<%@ taglib prefix="s" uri="/struts-tags"%>
<s:head/>
validations - Struts
validations log in page with validations user name must be special character and one number and remining is alphabetes in struts
Calling Action on form load - Struts
... this list is coming from the action which i m calling before the page is being... to direct user directly to this page i m calling an action which is preparing a list... with no ActionForm is the LogoffAction in the Struts MailReader application:
<
Configuring Actions in Struts application
Configuring Actions in Struts Application
To Configure an action in struts... Exception {
// TODO Auto-generated method stub
return SUCCESS;
}
}
And also write any JSP or HTML page
homepage.html
<!DOCTYPE HTML PUBLIC 2 Login Application
page in
the using Struts 2 framework.
Develop Login Form...
The loginsuccess.jsp page displays the Login Success message when
user is authenticated...Struts 2 Login Application
Reply - Struts
Reply Hello Friends,
please write the code in struts and send me I want to display "Welcome to Struts" please send me code its very urgent...
and which file need to be run and compile struts without using database
logout - Struts
logout how to make code in struts if i click the logout button then no body can back refresh the page , after logout nobody will able to check the page
Thanks
Anil Hi anil,
//...
session.removeAttribute
file download in struts - Struts
used validator in struts but it didn't worked...
i used validate() in form bean... entered into the page
n the list content are invisible...
HOw can i validate File Upload Example
;multipart/form-data".
code for the success page (uploadsuccess.jsp... the
form.
<li>
<html:linkStruts File...
Struts File Upload Example
login page
login page hi i'm trying to create login page using jsp. i get... the program. Following are my programs...
1.index.jsp
<%@page contentType="text/html" pageEncoding="UTF-8"%>
JSP Page
Struts Reference
Struts Reference
Welcome to the Jakarta Online Reference page, you will find everything you
need to know to quick start your Struts Project. Here we are providing you
detailed Struts Reference. This online struts
Struts validations
Struts validations Can you please provide the login page example
in struts1.x and struts 2.x
Where the login page should interact with the DB... to success.jsp if not should be stay in the same page.
Replay would be highly taken
Facing Problem with submit and cancel button in same page - Struts
Facing Problem with submit and cancel button in same page Hi,
can u please help me out.I have placed submit and cancel button in the jsp page... friend,
Read for more
Admin Page
Admin Page Hi All,
I have to develop Java Web Application using Struts. Can someone please tell me how to code for administration login .i.e when admin login he should have full access whereas others login they should have
java - Struts
java how can i get dynavalidation in my applications using struts... :
*)The form beans of DynaValidatorForm are created by Struts and you configure in the Struts config :
*)The Form Bean can be used
About Struts 2.2.1 Login application
and
password. If both is valid then it return SUCCESS otherwise it add the error... / browser.
To create an login application At first create a login page in JSP(Java
Server Pages). All the JSP files written for struts uses the struts
Struts 2 Redirect Action
Struts 2 Redirect Action
In this section, you will get familiar with struts 2 Redirect
action and learn to use it in the struts 2 application.
Redirect After Post:
This post
Redirect page
me ,how to redirect page in Tiles With JSTL .
<%@ taglib prefix="tiles"uri="" %>
<%@ taglib prefix="c"uri.../jsp/jstl/sql" prefix="sql" %>
<%@page contentType="text/html" pageEncoding
Forward - Struts
Forward Hi
how to get the return value of execute() in Action class.
we will write return mapping.findForward("success");
how to get the value of forward. Is there any method to get the which forward is executing
java - Struts
java code for login page using struts without database ...but using... page
User Name... Saved
Next Page to view the session value 2 Interceptor Example
; uri="/struts-tags" %>
<%@page import="...Struts 2 Interceptor Example
Interceptor is an object which intercepts...;
</action>
Consider an example of Struts interceptor given bellow
Struts + HTML:Button not workin - Struts
Struts + HTML:Button not workin Hi,
I am new to struts. So pls... in same JSP page.
As a start, i want to display a message when my actionclass...:
http
mapping.findForward("success");
}else{
errors.add("invalid File Upload and Save
;
code for the success page (downloaduploadedfile.jsp...;html:linkStruts File... Struts File Upload and Save
Error - Struts
Error Hi,
I downloaded the roseindia first struts example... create the url for that action then
"Struts Problem Report
Struts has detected...
This is my index page
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles
|
http://roseindia.net/tutorialhelp/comment/66268
|
CC-MAIN-2015-22
|
refinedweb
| 1,946
| 66.13
|
SYNOPSIS
#include <sys/types.h>
#include <sys/spu.h>
int spu_create(const char *pathname, int flags, mode_t mode);
int spu_create(const char *pathname, int flags, mode_t mode,
int neighbor_fd); non-existing directory in the mount point of the SPU
file system cre-
ated con-
texts, func-
tionality is disabled for SPU_CREATE_NOSCHED contexts. Only a
subset of the files will be available in this context directory name. process has reached its maximum open files limit.
ENAMETOOLONG
pathname is too long.
ENFILE The system has reached the global open files con-
texts has been reached.
ENOSYS The functionality is not provided by the current system, because
VERSIONS
The spu_create() system call was added to Linux in kernel 2.6.16.
CONFORMING TO
This call is Linux-specific and only implemented on the PowerPC archi-
tecture.-
computing/linuxoncell/ for the recommended libraries.
EXAMPLE
see spu_run(2) for an example of the use of spu_create()
SEE ALSO
close(2), spu_run(2), capabilities(7), spufs(7)
COLOPHON
This page is part of release 3.23 of the Linux man-pages project. A
description of the project, and information about reporting bugs, can
be found at.
|
http://www.linux-directory.com/man2/spu_create.shtml
|
crawl-003
|
refinedweb
| 190
| 58.08
|
How do I exit a thread?
How do I kill thr() in this example? Does it have something to do with the id arg passed?
def thr(id): from time import sleep i = 0 while True: print(i) i += 1 sleep(delay) import _thread _thread.start_new_thread(thr, (1, 0))
Maybe should I do this?
mythr = _thread.start_new_thread(thr, (1, 0)) mythr.exit()
- paulpraveen23 paulpraveen23 Banned last edited by
This post is deleted!
- paulkapil08 paulkapil08 last edited by
Thanks for the clarification.
I think the concepts @larry-hems mentions apply to Micropython, but the function calls might differ.
If I understand correctly, all threads in micropython are daemon threads, as they continue running after command is returned to the REPL. Only after a
machine.reset()or pressing the reset button will the thread be closed automatically.
@larry-hems are you referring to regular Python or Pycom's implementation of MicroPython? Because I couldn't find a daemon flag, or multiprocessing as an option with Pycom's uPy.
No daemon flag:
No multiprocessing module:
- larry hems last edited by
@BetterAuto said in How do I exit a thread?:
How do I kill thr() in this example?
It is generally a bad pattern to kill a thread abruptly, in Python, and in any language. Think of the following cases:
- the thread is holding a critical resource that must be closed properly
- the thread has created several other threads that must be killed as well.
If you REALLY need to use a Thread, there is no way to kill thread directly. What you can do, however, is to use a "daemon thread". In fact, in Python, a Thread can be flagged as daemon:
If you do NOT really need to have a Thread , what you can do, instead of using the threading package , is to use the multiprocessing package . Here, to kill a process, you can simply call the method:
yourProcess.terminate()
Python will kill your process (on Unix through the SIGTERM signal, while on Windows through the TerminateProcess() call). Pay attention to use it while using a Queue or a Pipe! (it may corrupt the data in the Queue/Pipe)
This code based on @Eric24's suggestion does work. Ugly, but functional. I had trouble with my opening post code so I went back to the example in the docs and that works.
import _thread import time kill = dict() def th_func(delay, id): global kill while True: if id in kill and kill[id]: break time.sleep(delay) print('Running thread %d' % id) for i in range(2): kill[i] = False _thread.start_new_thread(th_func, (i + 1, i)) kill[0] = True kill[1] = True
And this fails. _thread.exit() does not work from outside the thread.
import _thread import time threads = dict() def th_func(delay, id): while True: time.sleep(delay) print('Running thread %d' % id) for i in range(2): threads[i] = _thread.start_new_thread(th_func, (i + 1, i)) threads[0].exit() threads[1].exit() >>> Running thread 0 Running thread 1 Running thread 0 Running thread 0 Running thread 1 Running thread 0 threads[0].exit() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'NoneType' object has no attribute 'exit' >>> threads[1].exit() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'NoneType' object has no attribute 'exit' >>> Running thread 0 Running thread 1 Running thread 0 >>>
+1, is it possible to kill the thread from outside ?
Thanks, I'll try it soon. (Don't have access to my WiPy at the moment.)
@BetterAuto Hmmm. The _thread.exit() function in the docs isn't exactly clear on how it should be used, but since it has no params, my best guess is that it would be executed from inside the thread you want to kill. You might also try just breaking out of the while True loop.
Now, if the thread itself doesn't know it's supposed to die, you might have to implement an appropriate communications channel from the main program to the thread (could be as simple as a global "kill" variable), as there does not appear to be a _thread function that can kill a particular thread from outside that thread. It's a bit strange that this doesn't exist, as it's a common thing found in many threading implementations.
|
https://forum.pycom.io/topic/1393/how-do-i-exit-a-thread
|
CC-MAIN-2022-33
|
refinedweb
| 719
| 72.76
|
GameFromScratch.com!
Art
2D Applications iOS
A question came up in a comment in the Scene2D part of the LibGDX tutorial series about re-using actions. You very much can re-use actions, so I decided to do it in post form here. This entire post is mostly just one large code sample. It’s just easier to do it here than in comments. For a greater context of what this code is doing, see the earlier linked tutorial section.
Here is a sample showing action re-use.
package com.gamefromscratch;
import com.badlogic.gdx.ApplicationListener; Scenetest implements ApplicationListener, InputProcessor {
public class MyActor extends Actor {
Texture texture = new Texture(Gdx.files.internal("badlogic.jpg"));
public MyActor(){
setBounds(getX(),getY(),texture.getWidth(),texture.getHeight());
}
@Override
public void draw(Batch batch, float alpha){
batch.draw(texture,this.getX(),getY());
}
}
private Stage stage;
private MyActor myActor;
MoveToAction moveToOrigin,moveToCenter;
@Override
public void create() {
stage = new Stage();
myActor = new MyActor();
myActor.setPosition(Gdx.graphics.getWidth()/2 - myActor.getWidth()/2,
Gdx.graphics.getHeight()/2 - myActor.getHeight()/2);
moveToOrigin = new MoveToAction();
moveToOrigin.setPosition(0f, 0f);
moveToOrigin.setDuration(2f);
moveToCenter = new MoveToAction();
moveToCenter.setPosition(Gdx.graphics.getWidth()/2 - myActor.getWidth()/2,
Gdx.graphics.getHeight()/2 - myActor.getHeight()/2);
moveToCenter.setDuration(2f);
myActor.addAction(moveToOrigin);
stage.addActor(myActor);
Gdx.input.setInputProcessor(this);
}
@Override
public void dispose() {
}
@Override
public void render() {
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
stage.act(Gdx.graphics.getDeltaTime());
stage.draw();
}
@Override
public void resize(int width, int height) {
}
@Override
public void pause() {
}
@Override
public void resume() {
}
@Override
public boolean keyDown(int keycode) {
if(keycode == Input.Keys.NUM_1)
if(myActor.getActions().contains(moveToOrigin,true)) {
// this is "A" currently running action, do nothing
// If you wanted you could restart the action which
// would cause the duration to start over, like so:
moveToOrigin.restart();
// This action will now have a 2 second tween between
// its current location and target
}
else {
moveToOrigin.reset();
myActor.addAction(moveToOrigin);
}
if(keycode == Input.Keys.NUM_2)
if(myActor.getActions().contains(moveToCenter,true)) {
// this is "A" currently running action so do nothing
}
else {
moveToCenter.reset();
myActor.addAction(moveToCenter);
}
return false;
}
;
}
}
When you run this code, the graphic will start at the center of the screen and move towards the origin, with a total duration of 2 seconds. If you press the 2 key, it will start an action to move back to the center of the screen. Pressing 1 will move back to the origin. Pressing 1 while a moveToOrigin is active will cause that action to restart, basically resetting the total duration back to 2 seconds again, just from your current position.
The only things to really be aware of here are that an Actor getActions() will only return actions that are currently run. When the action is finished, it is removed from the Actor. The other thing of note is, although you can reuse an Action, you will have to reset() it before you can add it again, or it will already be over. If it is currently run, calling restart() has the same effect as resetting.
Programming
LibGDX Tutorial 2D
In this tutorial we are going to look at loading and using Tiled TMX maps. Tiled is a free, open sourced map editor, and TMX is the file format it outputs. You basically use it to “paint” levels using one or more spritesheets containing tiles, which you then load and use in your game.
Here is Tiled in action, creating the simple map I am going to use in this example:
By the way, I downloaded the tile graphics from here. Additionally, you can download the generated TMX file we will be using here.
I am not going to go into detail on using the Tiled editor. I actually covered this earlier here. For Phaser however, just be certain you export as either JSON or CSV format and you should be good to go.
Now let’s look at some code to load the tile map.
/// <reference path="phaser.d.ts"/>
class SimpleGame {
game: Phaser.Game;
map: Phaser.Tilemap;");
}
render() {
}
create() {
this.map = this.game.add.tilemap("ItsTheMap", 32, 32, 50, 20);
this.map.addTilesetImage("castle_0", "Tiles");
this.map.createLayer("Background").resizeWorld();
this.map.createLayer("Midground");
this.map.createLayer("Foreground");
this.game.camera.x = this.map.layers[0].widthInPixels / 2;
this.game.camera.y = 0;
this.game.add.tween(this.game.camera).to({ x: 0 }, 3000).to({ x: this.map.layers[0].widthInPixels }, 3000).loop().start();
}
}
window.onload = () => {
var game = new SimpleGame();
};
And when you run it… assuming like me you are using Visual Studio 2013 you will probably see:
Hmmmm, that’s not good. Is there something wrong with our tilemap? Did we make a mistake?
Nope… welcome to the wonderful world of XHR requests. This is a common problem you are going to encounter over and over again when dealing with loading assets from a web server. If we jump into the debugger, we quickly get the root of the problem:
Let’s look closely at the return value in xhr.responseText:
Ohhh. it’s an IIS error message and the key line is:
The appropriate MIME map is not enabled for the Web site or application.
Ah…
See, Visual Studio ships with an embedded version of IIS called IIS Express, and frankly, IIS Express doesn’t have a clue what a JSON file is. Let’s solve that now. If you created a new TypeScript project in Visual Studio, it should have created a web.config file for you. If it didn’t create one and enter the following contents:
<?xml version="1.0" encoding="utf-8"?>
<!--
For more information on how to configure your ASP.NET application, please visit
-->
<configuration>
<system.web>
<compilation debug="true" targetFramework="4.5" />
<httpRuntime targetFramework="4.5" />
</system.web>
<system.webServer>
<staticContent>
<mimeMap fileExtension=".json" mimeType="application/json" />
</staticContent>
</system.webServer>
</configuration>
Now the code should run without error
I should take a moment to point out that this is an entirely Visual Studio specific solution. However, this particular problem is by no means limited to IIS Express. I documented a very similar problem when dealing with WebStorm’s integrated Chrome plugin. If your loadJson call fails, this is most likely the reason why! Well that or you typo’ed it. :)
Ok, assuming everything is configured right,now we should see:
By the way, you may have to click on the to get it to start rendering.
Most of the loading code should look pretty familiar by now, Phaser is remarkably consistent in its approach. There are a few things to be aware of though from that code. First, the order you create layers in is important. In Tiled I created 3 layers of tiles. A solid background layer named “background”, a middle layer with most of the tiles in it called “midground” then a detail layer for the topmost tiles named “foreground”. Think of rendering tiles like putting stickers on a flat surface… the front most stickers will obscure the ones that are behind them. The same is true for tiles. There are other options in tiled for creating foreground and background layers, but I stuck with normal tile layers for ease. Just no that more efficient options exist.
The next thing to be aware of is when I called addTilesetImage, that is the same image filename that I provided to Tiled. It is important that you use the same graphics files and names between Tiled and your code. The next thing worth noticing is the call to resizeWorld() I made when loading the first tiled layer. This simply set’s the world’s dimensions to be the same size as the tile layer you specified. Since all the tile layers are the same size, you could have called it on any of them. Finally, we made a simple tween that pans the camera from one end of the level to the other and back.
There is more to touch on with tiles, but I will have to cover that in later post(s).
Programming
2D Applications Phaser TypeScript Tutorial
Now we are going to look at available raster graphics programs available on iPad. While this post is part of an over all series about creating a game using only an iPad, this post should be of use to anyone looking to create art in general. The iPad ( and other tablets ) are becoming increasingly viable ways of creating art, especially 2D art. One major reason for this is cost. An iPad is generally cheaper than a PC/Mac + graphics tablet, but it’s software costs where this really becomes obvious. For example, the desktop version of Photoshop ( before it went subscription ) used to cost about $800. The tablet version of Photoshop… $10! Another desktop/tablet example is ArtRage, which ( although vastly cheaper ) is available at $50 on Desktop, it is only $5 on iPad. Granted, they aren’t identical products, and often the iPad version has less functionality, but I tend to find it has all the functionality I generally need. You mileage may vary.
So we are going to take a look at some of the Raster and Vector packages available on iPad. Raster and Pixel mean basically the same thing, although “Pixel Art” has often come to represent a very specific style, with fat chunky pixels like from the 8 and 16bit era. We will also look at two options aimed specifically at this style. We will also look at Vector graphics packages, which allow you to define brush strokes mathematically, making for nice scalable graphics.
I have also done a video quickly showing each application running, so you can have a better idea what the experience is like. I only spend a couple minutes with each, but a few minutes is often all you need to decide if something is right for you or not! All testing was done on a iPad Air, so if your device is slower, your experience may not be as good.
This video is a quick demonstration of every application mentioned below. You can view it directly here.
Ok, let’s jump right in:
iTunes Link
Cost: $9.99
Screenshot(s):
Be careful when it comes to the Photoshop products, Adobe have released a number of Photoshop branded iOS projects and most of them are focused on photo manipulation and most useless to you. The version you want is Photoshop Touch. This is the mobile version of the venerable desktop Photoshop application. While certainly stripped from it’s desktop counterpart, it is still impressively capable.
One immediately useful feature is the allowed canvas size. Earlier versions where limited in canvas size, while now you can successfully create a 4096x4096 texture, which is the reasonable upper limit of what you will ever want to create. I will admit though, at this size things can get a bit sluggish at times, although the painting experience is just fine, running tools like magic wand select or applying fx can bring up a wait logo. Speaking of which, that is two areas where Photoshop really shines compared to it’s peers. The selection tools are great, including a magic wand, scribble selection, polygon, lasso, etc select and deselect tools.
Tools are solid but not exceptional. Feedback is great, there is no lag even on large files. Navigation takes a bit to get used to, but is quick once you know what you are doing. It has a few standout tools, such as the clone and heal brushes, which are somewhat rare. Otherwise you are left with paint, burn, smudge, spray and erase as your primary art tools. You do however have a ton of fine control over each tool, such as brush patterns, angle, scatter, size, flow and transparency. You have to set it up yourself, but you can emulate basically any brush type you desire.
Of less use to game art, but still occasionally valuable, Photoshop Touch has 16 built in adjustments like Black&White, Shadow/Highlight, etc. There are also 30+ filters built in, such as Sharpen, Drop Shadow, Blur, Glass, Comic, Watercolor, Sephia, etc. There are also an impressive number of manipulation tools tucked away in here for cropping, scaling, rotating, transforming(warping) and fairly solid text tools as well.
Where Photoshop Touch really shines is layers. You can have several layers, either new, cloned or imported from existing media. Layer control is solid making it easy to move layers up, down, merge and delete as well as altering the opacity. Additionally layers can be normal, darken, multiply, lighten, overly, subtracted, etc. Fx can be limited to an individual layer.
While Photoshop Touch may not be the program you create your art in, it should probably be in your toolbox for the sheer amount of flexibility it gives you. In terms of alterations and manipulation of images, it can’t really be touched. The selection, copy/paste, import and layering tools are easily the best out of any tool I look at.
In terms of getting your work out at the end of the day, unfortunately there is no direct Dropbox integration, but that isn’t really surprising given Adobe have their own cloud storage system, Creative Cloud. In addition to their cloud offering, you can also save to the local photo roll. There is however a Share option, allowing you to export the file ( as JPG, PSD, PSDX or PNG ) to just about any iPad application ( including dropbox ) or to email, printers, etc. However the process is remarkably slow and sometimes simply doesn’t work. At the end of the day, you can get just about anything in and out of Photoshop Touch that you would expect, but it can be awfully slow at times.
I suppose it’s fair to point out, it’s actually Photoshop Touch I used to resize and export the screenshots from my iPad to my Mac while creating this post. It’s just a handy tool to have.
Cost: $3.99 for Pro Tools
This product is a bit difficult for me to cover, as the version I have doesn’t actually seem to exist anymore. At some point they moved from a premium iPad only product (Sketchbook Pro) to an freemium model, where you can download the base (Sketchbook Express), then for $3.99 unlock the premium tools. I believe at the end of the day, they become the same product, but if there are minor differences, I apologize now. As I am not entirely certain what is in the free vs pro edition, the below will be describing the complete purchased product.
Sketchbook is exactly what the name suggests, a virtual sketchpad. It’s an impressive one at that. It’s got a minimal interface that get’s out of your way ( all of the things you see in the above screenshot are brought up by pressing the circular icon. Let go and it’s just you and your drawing. Drawing tools are pretty typical with pen, pencil, market, highlighter, erase, smudge and airbrush available, plus the ability to choose custom brushes/stamps from probably 100+ presets, including common pencil type (2H, 8B, etc) and oddly enough, so clip art. Responsiveness while drawing is instant. Using a quick pop up tool you are able to alter your brushes dimensions and opacity with a single touch. One nice feature, lacking from many similar products, is the ability to draw lines and shapes, although curves didn’t seem to work.
There are a couple unique to Sketchbook features, of varying levels of usefulness. There is a symmetry draw mode, enabling you to draw mirrored about a centre point. You can also do time lapsed recording and collaborative sketching with someone else with their own copy of Sketchbook. Sketchbook also has decent text creation abilities built in. Most importantly (to me), Sketchbook also has layer support, although nowhere near that of Photoshop Touch. Even just a single layer to enable tracing for animation is invaluable.
You can export from your gallery directly to the local photo gallery, to iTunes, email, etc… as well as directly to Dropbox. You can create images up to 1800x2400 in size, although the size of image limits the number of layers. A 1800x2400 image can for example have 4 layers, while a 768x1024 image can have up to 18 layers. The largest image you are able to create is 2830x2830. No idea why it stops there… Even at that size, performance remains smooth.
Sketchbook is a great product for creating sketches, the brushes are natural, performance is good and tools are complete enough to recreate most anything. The interface is faster to navigate than Photoshop Touch, but it has a great deal less functionality, with absolutely no filters, effects, selection tools and minimal layer functionality. For drawing however, I would take it over Photoshop Touch every day.
Cost: $4.99
This is the most applicably named application I have ever encountered! It’s an amazing application for creating digital art, and it is horrifically rage inspiring! ArtRage attempts to recreate real world art process digital, and does an incredibly good job of it. You can choose your paper type (grain), metallic-ness then go to town using tools like paint, roller, trowel, crayons, markers, pens, erasers, pencils, airbrush and a couple different paint brushes. For each brush you can set a number of settings specific to each tool such as pressure, thinning etc. You really can recreate a very “painterly” feel. So why the rage?
Well that part is simple. This application lags. Always lags and I absolutely cannot use it for that reason. Regardless to the complexity of your scene, your paint brush will always been a half a second behind where you are touching. Of all the applications I looked at, this one had by far the worst performance. If you can handle the delay while drawing, this application is certainly worth checking out, especially if you are looking for a natural media look. I personally cannot get over the lag.
From a technical perspective, Art rage allows a maximum canvas size of 2048x2048. It supports layers with an absolute ton of blending modes. There is no manipulation tools for selection, transformation nor any filters or effects. This is a painting program focused on painting and nothing more. It however probably does the best job of recreating brushes and paper in the digital world. It has the ability to export to Dropbox as well as save locally, but not to send to other applications on your iPad.
Cost: Free for Base + up to $3.99 to add tools
Made by Wacom, famous for the titular Bamboo tablets. The free version ships with a pen, then for 99 cents each, or $3.99 for all, you can add tools such as Brush, Crayon, Pencil etc. It’s got a slick package, good export support (including Dropbox) and does feel like working in a notebook.
That said, it’s simply too limited to be used for much more than sketching. Lack of layer support, minimal dimension options, no selection tools, filters or advanced brushes.
Cost: Free then up to $6.99 for all tools
Paper is a somewhat famous application, as it has been used in Apple promotional materials. It is also incredibly basic, basically just trying to mimic the drawing on paper experience. As you can see from the screenshot above, I only have the free version installed. For up to $6.99 you can add other art tools like a pencil, marker, paint brush, colour mixer, etc. Export abilities are limited to save to camera roll and send to app.
The drawing is nice and natural, but it’s too limited to be of much use for game development. Additionally, it’s price is hard justified compared to similar applications. Can’t really recommend paper other than for sketching if the minimal interface floats your boat.
Cost: Free to start, up to $4.99 for all tools, plus text and layers
Sketches is very similar to Paper and Bamboo Paper, in that it emulates a classic sketchbook. However the major exception is, with the premium purchase of $4.99 you gain among other things, layer support. Additionally the free version includes a complete set of tools, but limits the customizability of each. It contains all of the same tools as the previous two applications. The interface can be minimized by swiping aside panels. Navigation is multitouch based and to be honest is confusing as hell until you get used to it.
You are able to export to a number of formats, including Dropbox. You can select from a number of paper types, but unfortunately have very little control of resolution of the image, maxing out at that of a Retina iPad.
Of all the sketch based drawing applications, this one was easily my favourite, even with it’s somewhat unnatural navigation interface.
Cost: Free to $6.99 for full toolset
Of all the applications I’ve covered yet, Concepts is probably the most unique, and in some ways, the most powerful. Designed for creating sketches for concept art, it comes with several tools you haven’t seen yet. For example, there is a tool for tracing lines and arcs. There are snapping tools for snapping to the grid. Speaking of which, the grid can be turned off, on, and set to a number of different densities and formats, including isometric, which can be invaluable if that is the art style you are going for. There is full layering support, but they are intended for organization/tracing/layering and not artistic effects. Beyond transparency, there are no blending options for layers.
Artistic tools are somewhat limited to pens, pencils and markers. More natural art style would be hard to achieve in this program as a result. That said, the color system is amazing and emulates COPIC markers, allowing you to create pretty much the same tools that concept artists use.
For precision works, this is hands down the best choice out there. For drawing striaght edges, lines and curves, only the next option comes close. For more painterly effects, this is a poor choice. There is no filter or fx support. Export support is robust, and actually capable of exporting to DXF ( Autocad ), SVG ( Vector ) and PSD formats. You can also export as an image to the local camera roll, as well as export to Dropbox and others ( including somewhat shockingly, Adobe’s Creative Cloud ).
If you are doing concept art, or going for a technical look, this is hands down your best choice.
Cost: Free!
Adobe make a number of products of a very similar nature, Adobe Ideas, Adobe Lines and Adobe Sketch. Adobe Draw however ties together the best of each of them and has some very interesting abilities. By far and away the most important feature is the ability to use an angle or french curve ( like shown above ) to draw straight lines and curves. The actual drawing features are somewhat limited, mostly pen/pencil style drawing implements. You can control the brush tip size, colour and opacity and that’s it. There is layer support buts it’s tied to each tool somewhat oddly. The response is quick, the interface is nice and clean and though limited, the brushes are different enough to be useful.
All that said, this application is free. It’s good and it’s free. In some ways the drawing tools are amongst the best available. The angle/french curve functionality is exceedingly well implemented, much better than the curve support in Concepts, which is the only other program that offers similar functionality. Export functionality is fairly traditional, you can save to the local camera roll, upload to creative cloud and hit the standard share with targets, including Dropbox. Unfortunately it seems you have little (no?) control over the canvas size.
I don’t really understand the business model here, but free is a wonderful price. Be sure to check this one out.
Cost: Free *for a limited time, seemingly forever
I will be honest, I had a great deal of trouble navigating my way around this application. Even saving a file was somewhat perplexing. In many ways it has an interface like many desktop art packages like GIMP or Paintshop. That interface doesn’t necessarily work on a touch device.
There is actually a LOT of functionality packed in here, more so than most of the packages listed here, except perhaps Photoshop Touch. There is full layering support, but limited to 3 layers ( I think, who knows whats behind that interface that I couldn’t figure out! ), and all kinds of blending functionality between layers. Art tools are somewhat limited. I think everything you need is in here, but accessing it is somewhat of a trick.
That said, it’s listed as free for a limited time, and that was over a year ago. It doesn’t seem like this application is still being developed, so it’s a completely free option. For that reason alone you should check it out, the interface might click for you better than it did me. If nothing else, the price is right!
This application is unique in the list as it is specifically aimed at low resolution ( 8/16bit ) pixel art. At it’s core it’s a fat bit grid editor. You draw pixel by pixel, although it does have some handy tools like fill, line drawing, etc. There is also layering ability, but there is no blending between layers, it’s literally just stacked pixels. You work in a massively zoomed in window, but you can preview the end result as you work. There are also tools for doing frame by frame animation sequences, including onion skinning functionality. This part of the interface can be a bit clunky.
One very unique thing about sprite something is, it has a simple level editor built in as well. See the video for more details. Export is limited to email and local camera roll. If you are working in fat bit pixel style, this is your best ( and almost only ) option.
I’m just mentioning this one for thoroughness. If you are reading this as part of the overarching iPad game creation tutorial, there is a simple pixel art example “Spritely” built into Codea. Its another fat bit grid editor for pixel art. It’s very simple but may be enough for you if you have simple requirements.
Obviously not recommended to non-Codea users.
Vector graphics applications work a bit differently than raster packages we described above ( except perhaps Concepts which is a cross between Raster and Vector graphics ). Raster applications work on a pixel by pixel basis. Vector graphics on the other hand work on mathematic formulas representing each brush stroke. This gives you a bit less fine control over the end results, but allows you to scale up or down to any graphic resolution you want.
Cost: Free and Open Source
Completely free and open source, Inkpad is a powerful vector graphics package. If you’ve ever used Inkscape on a PC, you know what you are in for. You draw using paths, manipulate curves either straight line or bezier for curved edges and using simple geographic shapes, build, color and layer them to create more complex images.
Inkpad has full layer support, although they don’t really effect each other like in raster packages. You can however bring in a standard graphic as a background or to trace over. Inkscape supports saving as an image locally or exporting as PDF or SVG.
Once again, it’s completely free. Free is nice.
Cost: $8.99
iDraw has one of those iNames iAbsolutely iHate, but don’t judge the package by it’s name. I absolutely love this package and strongly recommend it to anybody reading this that wants to work with vector graphics. This application works very similar to Inkpad, except with more functionality, more polish and a higher price tag. I have struggled in the past with vector graphics applications such as Inkscape ( unwieldy ) and Illustrator ( overwhelming ) and iDraw is the first one that just “clicked” for me. Once again, the basic concept remains the same, you draw shapes using lines (paths), fill those paths with colors or gradiants, and layer them to make more complex shapes. One major difference between this and Inkpad is the free form drawing tools, that allow you to use it much more similar to a traditional drawing package.
iDraw is also available as a complete application on Mac and files are interchangeable. This page has full layer support and is capable of saving directly to Dropbox. Files can be exported as iDraw, PDF ( very very very useful with Codea as we will soon see ), SVG, PSD, PNG and JPEG, with resolutions of 72, 144 or 300dpi.
This is only a small subset of graphics packages available on iPad, but does represent a large cross section of them, as well as featuring most of the “big names”.
Myself, if I were to only be able to keep a couple applications, my personal choices would
|
https://www.gamefromscratch.com/?tag=/2D&page=24
|
CC-MAIN-2019-43
|
refinedweb
| 4,854
| 64.41
|
Программная нанопоэма “можно” здесь.
Internet of Problems
For a while now, it’s considered to be progressive, wise and plain old cool to blame the internets for all the problems. The fact that it became cool can be partly explained by the fact that internet is the mainstream today (unlike the days when it was most loudly criticized by conservatively-minded people), so criticizing it gives you coolness points.
Half-march update
Changes are
Changes are really your own self-centered eclectic lies universal self.
One of the least original meta-jokes
Theory of Kyoufu
A few book titles:
What awaits humanity, pt I: extinction
…between true utopia, assorted distopias and extinction.
The art of provocation: external eclecticism vs inner consistency
That rare case
Internet-era books on ethics hidden behind a paywall are a rare example of books that can be easily dismissed by (otherwise anecdotic) “didn’t read, but disapprove” formula.
VHEMT site review
“May we live long and die out”
Consequentialism vs its promotion
This post presents a sketch for an “almost mathematical” proof that promoting consequentialism is in fact against consequentialism. By no means i suggest that this thought is original, but i haven’t stumbled upon it in such formulation.
Immortality
After spending years of my youth on contemplating immortality, i came to a simple conclusion: for any meaningful definition of immortality that is worth achieving, it is impossible to achieve immortality. In other words, if definition of immortality has something in common with common sense understanding of the word and it looks like a desirable state, every entity either is inherently immortal or will never be.
Communication amplifies your loneliness
While i won’t claim that to be absolute truth, i think many might find it interesting to consider.
Writing spaghetti: don't underestimate complexity of simple tasks
tl;dr don’t code in an ineffective way if you don’t enjoy it
0net site migration: 91% complete
There are still a few things to polish, but it mostly works.
Which means that the era of that weird comment system employed on this page is coming to an end: since it was never quite used and incompatible with ssl, i’ll put it down right after adding commenting capability to 0net version.
Sometimes i wonder..
..how many people care to understand their native language.
Writing
Yesterday i got back to writing my nanowrimo-2017 novel. Today i’m writing this post. Who knows, maybe i can get into the habit of writing regularly?..
The obligatory empty page
Finally, i’m in the process of moving into 0net. I had to start somewhere and decided to create an obligatory empty page. As the title suggests, there’s not much interesting there, but i’ll be grateful if you’d seed it so that if there are any technical issues i can solve them now instead of dealing with them when publishing actual content.
How vulnerabilities can improve performance
While some are complaining that patch for recent cpu vulnerability slows down their computers, the only thing i noticed is that my performance seems to increase. With more JS disabled and less browser uptime i have more free ram available and am less distracted by interwebs.
Declaration of intent
Even if i’m not big fan of new year concept, there’s still one thing i really wanted to do “this year”: migrate away from the old rotten centralized web into something better, or, at the very least, start such migration.
However, since i’m constantly postponing any efforts in this direction, the only thing i can actually get done before 2018 unleashes on us is start cross-posting into ZeroMe.
So, here it is.
Pulse Of Life
I’m not keeping my promises about showcasing RainyNite, but at least here’s another screenshot. This time from upcoming v0.6 “Pulse Of Life”.
RainyNite July update
So it’s been a long time since my last post again; i guess i just can’t make a habit of regular updates.
In the meantime, there were three RainyNite “releases” and i’m preparing next one (all planned features are already there, but i need a new cover image).
Introducing RainyNite #1
This is the first post in series about RainyNite in which i’ll be trying to explain why it came to be and why it might be relevant in the age of many free software animation tools, as well as whether you should be interested in it or not.
Few humble announcements
1) I participated in LD38 and took part in creating this game.
2) RainyNite source is published here. I know i should put build instructions there. Will do that soon.
3) I have ZeroMe account now (caryoscelus@zeroid.bit). Not sure how much i’ll be using it, but expect more ZeroNet news from me anyway.
Random web parallax video demo
WARNING: requires support for transparent webm videos (firefox: 53, blink seems to have introduced it long ago) and video autoplay enabled!
Semi-linked region/outline in Synfig
Here’s a short tutorial on how to get a shape in Synfig with partial outlines linked to it. This is pretty much a workaround, but the fix might require either a lot of major changes or a lot of pain with inelegant code.
Synfig vector morphing without waypoints
Since i’m reworking Synfig time & animation subsystem, i’m getting rid of current hard-coded & hard-to-edit interpolation via waypoints and replacing it with lightweight valuenodes.
13 TET
Writing 13 TET music turned out to be easier than i thought.
Random demo (continued)
As i promised, here’s a video with some explanation about time patch layers.
Random demo
As i mentioned before, i’ve been working on Synfig lately. And here’s a little demo of the Time Patching feature, which is the most interesting thing of what i am doing.
It's been a while
It’s been a while since i updated this blog and/or site, but lots of awesome stuff is going to happen soon nevertheless.
taking out the trash
Finally published my second album: taking out the trash. Personally i find it more interesting, both to make and to listen. Currently it can be found here on jamendo, but hopefully i will upload it somewhere else as well.
Third episode
Third episode of scandia letters is ready! And second episode is translated into english, by the way. Download here.
Next episode might be delayed a bit due to ludum dare, but i hope not.
Python import magic
Ever wondered how to make package export both its submodules for use with
from package import *and re-export module members for use with
from package import ClassName? Well, you can of course specify
__all__and then do
from .module import *for each module, but that’s repeating yourself!
Instead, here’s a black magic (not really) technique which requires to specify each module only once.
Second episode
With a small delay, second episode of scandia letters is out!
Self-translation is pain
It has a combination of translation pain and reading-your-own-text pain, plus pain of witnessing your text being crippled. Especially so if you are less than perfect at target language.
scandia letters
Without any particular (except for the obvious) reason, i’ve chosen today to officially launch my new project: scandia letters (or письма скандии in russian, in which it is being written). Why yet another new project? Well, because i tend to extend and inflate any project i start. But this one is a series, which makes it easier to follow deadline and actually publish something.
Since it is started, you can now expect more posts here, official announcement on lemmasoft forum
and perhaps a bit of downtime if github servers would be unable to handle all the sudden traffic(just wanted to insert some unfunny joke here, sorry).
The first episode (currently in russian only, but english translation will follow) is available here. Currently you need to have Ren’Py launcher to run it, since i’m doing this in a rush, but more user-friendly release is to be made soon.
Redesign is coming
I’m too lazy to put out a countdown for you, but it’s going to happen soon. It can be any minute now since i actually have satisfying design and only want to finish updating info as well before launching new stuff.
Sometimes...
…nobody is hiding the truth from you, but you don’t even bother to check.
Ludum Dare 32
Continuing my last year’s “tradition”, i was doing april LD solo. Continuing long-time LD tradition, theme was horrible. But apart from those traditions i didn’t have much time to participate, almost entirely missing first day.
meanwhile
Meanwhile, i finally released album. Which is pretty much a bunch of random tracks, but who cares. Most important that i don’t have to care about polishing and publishing it, so time to move on.
NaNoRenO fail
Time to admit it, perhaps. I could’ve built a working demo, but it would be too unpolished for publishing on NaNoRenO.
If anyone interested (hello imaginary friends), there still will be demo sometime soon, so keep waiting. Maybe i’ll even make a teaser video and loud announce.
Future
Your future is what you make for yourself.
Don’t gamble on someone waltzing in your life and fixing it for you.
Don’t try to force someone to fix it — that will fail or backfire.
Learn to be yourself, to be whatever you want to be, without relying on anybody.
LD 31: games worth mentioning.
NOTE: This entry will be updated as i find more interesting stuff.
Chess Chatter is here
Too tired of LD to write anything interesting about it yet, so i’ll just mention it was great. And the result of our efforts is here, but that’s just where you came from, isn’t it?
Will write more detailed report later. You can go read game meanwhile.
Ok, everybody! We're taking lots! Players, please come and take your destiny!
So yeah, LD 31 started with an unfortunate theme. But i’m still doing a small VN cooperating with lonely2012 and maybe someone else from Transient team.
Progress so far: i have an idea and a bit of text, lonely2012 is doing character art.
UPD: Repo is here:
Finished MiniLD 55
The game is here. It’s text-only game in chat with some roguelike influence. Not polished and probably quite boring.
Source code here. Mostly PureScript with helper sh+Python script. Note that code is somewhat imperative and unsafe (e.g. everything related to random), so large parts should be completely rewritten…
It’s unlikely that i’ll continue active development, unless there will be some interest in it. But it was nice coding experience.
Checking out PureScript
In search for something better than raw JavaScript which i used as fast solution, i found out about PureScript - “small strongly typed programming language that compiles to JavaScript” as described on it’s site. It’s highly inspired by Haskell (and written in it) which already means it’s pretty good. Not sure if it’s better than some Haskell-to-JS solution, but i’m giving it a try with new comment-anything frontend.
comment-anything update
comment-anything (which is my bicycle project for commenting on this site) is updated and now is more reliable (i’m doing regular backups now).
It is still ugly, buggy and insecure, but who cares?
What is this - obvious riddle for imaginary readers
Here’s an image from soon-to-be-released something. Try to guess what it is and win an unique chance to see it earlier!
Side note: if you can't draw poses
Yeah, then you should study, study, study, draw, draw, draw until you get it right. But if you can’t afford that, there are always ways around. One of them is described below.
Working on TDA again
It’s been quite a while since i worked on “The Day After” (see projects for more info), but it’s not dead. Starting from today i’m actively working on it again and hopefully will produce demo soon. Details to be revelaed later. I’ve got some work to do ^_^
Added commenting possibility
Added test possibility to add comments. Will be improved eventually, but at least this works for now. Proper backuping is not set yet, so don’t rely too hard on comments permanency, but eventually it will al be sorted out.
First post!
Well, all the interesting stuff will be posted later. This post is here just to mark creation of blog :)
New 0net-based comment system is in development
|
http://caryoscelus.github.io/blog/
|
CC-MAIN-2018-13
|
refinedweb
| 2,136
| 62.88
|
Networking in Docker is the means through which containers communicate with each other and external workloads. The most distinctive and flexible feature of Docker containers is the ability to network with the workloads that might or might not be of Docker in nature. Containers also work independently of the nature of the host they are deployed on, be it Windows, Linux, Mac OS, or a mix of two.
The scope of this article
This article gives the basic information on networking in Docker and Drivers for networking. It helps you do some basic exercise on networking, like Listing all networks, creating a new network and Inspecting a network, each with an example. It also explains the details of networking on each of the network types, restricting the explanation to the default types.
Table of Contents
When a Docker is installed, three networks are created automatically for the Docker. They are the bridge, none, and the host. You can see these networks using the command, docker network ls.
A docker networking is a pluggable system. You can plug a network for a Docker using the corresponding driver. Several Drivers are present by default and provide core networking. For the default networks (bridge, none and host), the corresponding drivers present are the bridge, null, and the host.
When you want to specifically provide a network to your container, you can use the flag, --network to specify your choice.
BRIDGE
The bridge network is the docker0 network present on Dockers and the Daemon connects all containers to this network by default. If you run the command, ipaddr show on the host network, you can see bridge, as the default network displayed.
Note: ipconfig is a deprecated command now. You can alternatively use ip a as a shorthand notation of ipaddr show.
The daemon connects all the containers of the host to bridge by default, by creating 2 virtual peer interfaces, where one of the interfaces becomes the eth0 of the container and other in the namespace of the host. An IP address will be assigned to the bridge whilst, the interfaces are created.
NONE
When you explicitly specify the network to be none, the containers are added to another stack called as none stack. This lacks a specific network interface. This is useful in two cases:
HOST
The host network adds the containers to the network stack called host. This stack keeps no isolation between the containers and the host machine. Since the container shares the namespace with the host, it is exposed to the public directly. This effectively implies that the container and the host share the same IP address. Since there is no overhead routing involved in the networking, this is faster than the bridge networking. However, the security implications are to be considered here, as it is directly exposed to the public.
When it comes to the possibility of configuring the network, bridge and user-defined bridge networks are only available. The none and host networks are not yet configurable on Docker.
Drivers make the networking subsystem pluggable on Dockers. There are few drivers that exist by default, while few are user created. Below are some of the default drivers present, and are explained in brief with their functionalities.
Below is a list of basic operations used in Docker Networking, which are essential to be known, in order to establish successful networks on Docker systems.
Listing All Docker Networks
Command Syntax:docker network ls
Options:None
Return Value:Displays all the networks connected on the Host.
Example
Command: $ docker network ls
Output:
NETWORK ID <span style="white-space:pre"> </span>NAME <span style="white-space:pre"> </span>DRIVER 7fca4eb8c647 <span style="white-space:pre"> </span>bridge <span style="white-space:pre"> </span>bridge 9f904ee27bf5 <span style="white-space:pre"> </span>none <span style="white-space:pre"> </span>null cf03ee007fb4 <span style="white-space:pre"> </span>host <span style="white-space:pre"> </span>host
Inspecting a network lets you get more details of a particular network of interest on the Host. For example, you will get information on the containers connected to the network with port and IP address details.
To inspect, you have to specify the name of the network. In the below syntax, "networkname" should be replaced by the network name of choice.
Command Syntax:docker network inspect networkname
Options: networkname should be the name of the target network
Return Value: Returns all the details associated with the network.
Example:
Command: $ docker network inspect bridge
[ { <span style="white-space:pre"> </span>"Name": "bridge", <span style="white-space:pre"> </span>"Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f", <span style="white-space:pre"> </span>"Scope": "local", <span style="white-space:pre"> </span>"Driver": "bridge", <span style="white-space:pre"> </span>"IPAM": { <span style="white-space:pre"> </span>"Driver": "default", <span style="white-space:pre"> </span>"Config": [ <span style="white-space:pre"> </span>{ <span style="white-space:pre"> </span> "Subnet": "172.17.0.1/16", <span style="white-space:pre"> </span>"Gateway": "172.17.0.1" <span style="white-space:pre"> </span>} <span style="white-space:pre"> </span>] <span style="white-space:pre"> </span>}, <span style="white-space:pre"> </span>"Containers": {}, <span style="white-space:pre"> </span>"Options": { "com.docker.network.bridge.default_bridge": "true", <span style="white-space:pre"> </span>"com.docker.network.bridge.enable_icc": "true", "com.docker.network.bridge.enable_ip_masquerade": "true", "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0", "com.docker.network.bridge.name": "docker0", <span style="white-space:pre"> </span>"com.docker.network.driver.mtu": "9001" <span style="white-space:pre"> </span>}, <span style="white-space:pre"> </span>"Labels": {} } ]
Note: IP address will be different in your results.
Creating your own Network
You can create a custom network for your container before launching it. This can be done using the following command.
You can specify the driver for the network to be hosted on in the "drivername" space.
Command Syntax:docker network create –-driver drivername name
Options:“drivername” substitutes for the name of the driver used for the network.
“name” substitutes for the name of the network to be created.
Return Value:It will return a long string ID of the new network created.
Example:
Command: $ docker network create –-driver bridge new_nw
Output:
f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f
Now, you can attach the new network while you launch the container. This can be done using the following command for an Ubuntu container:
Command:$ docker run –it –network=new_nwubuntu:latest /bin/bash
In order to see the details, inspect the network and see that the container is attached to this network.
Once we are aware of employing basic commands to create or inspect a network on Docker, let us go a step ahead to understand ways of networking in Docker. This enables us to understand how different Docker architectures can be dealt within networking. Having said that, there are 2 ways of doing networking in Docker, they being:
Networking on a Single Host
On a single host, generally, the containers communicate with each other through the IP address obtained from the default bridge network. The other ways of communication include the use of the container names as a key to connect. This is the more efficient way of communication as IP address will be assigned dynamically during container creation and name will be easier to use for networking.
Multi-host Networking
This is very different from the Single-host Networking in terms of the connection and performance. Containers can be spread across the hosts in a multi-host system. The networking will be established between the containers and the multiple hosts and among the containers of the same host. Having said this, in order to detect the hosts, service discovery plays a vital role.
Service discovery helps you to get a hostname and hence the IP address.
In order to work in a multi-host mode, you have to consider two options:
Networking Tutorials
We are dealing with details of networking with only the default networking techniques. The user-defined versions of each networking technique are not handled in this content. There are 4 different types of default networking available on Docker. They are being:
The section below gives an insight into networking in each of the type.
This information is restricted to the details on networking with the standalone Docker containers. This can be configured on Windows, Mac and Linux OS.
We need two alpine containers to test networking and communication in this scenario. Pre-requisite is the installed Docker which is up and running, on your system.
Follow the steps below:
Command:$ docker network ls
Output:
NETWORK ID <span style="white-space:pre"> </span> <span style="white-space:pre"> </span>NAME <span style="white-space:pre"> </span>DRIVER <span style="white-space:pre"> </span>SCOPE 17e324f45964 <span style="white-space:pre"> </span>bridge <span style="white-space:pre"> </span>bridge <span style="white-space:pre"> </span>local 6ed54d316334 <span style="white-space:pre"> </span>host <span style="white-space:pre"> </span>host <span style="white-space:pre"> </span>local 7092879f2cc8 <span style="white-space:pre"> </span>none <span style="white-space:pre"> </span>null <span style="white-space:pre"> </span>local
The default bridge network is used to connect to two containers.
Commands:
$ docker run -dit --name alpine1 alpine ash
$ docker run -dit --name alpine2 alpine ash
Explanation
-ditflag stands for detached, interactive, and TTY. The flag indicates that it is detached in the background, has an interactive tool to type and TTY, as you can see the input and output right away.
Since you have started it detached, you will not be connected to the container. While, since --network flag is not used, containers connect through bridge network by default.
Now, list the containers to see that 2 containers are running successfully.
Command:$ docker container ls
You can now inspect the bridge network, to see the details of containers connected to it.
Command:$ docker network inspect bridge
Output:
[ { <span style="white-space:pre"> </span>"Name": "bridge", <span style="white-space:pre"> </span>"Id": "17e324f459648a9baaea32b248d3884da102dde19396c25b30ec800068ce6b10", <span style="white-space:pre"> </span>"Created": "2017-06-22T20:27:43.826654485Z", <span style="white-space:pre"> </span>"Scope": "local", <span style="white-space:pre"> </span>"Driver": "bridge", <span style="white-space:pre"> </span>"EnableIPv6": false, <span style="white-space:pre"> </span>"IPAM": { <span style="white-space:pre"> </span>"Driver": "default", <span style="white-space:pre"> </span>"Options": null, <span style="white-space:pre"> </span>"Config": [ <span style="white-space:pre"> </span>{ <span style="white-space:pre"> </span>"Subnet": "172.17.0.0/16", <span style="white-space:pre"> </span>"Gateway": "172.17.0.1" <span style="white-space:pre"> </span>} <span style="white-space:pre"> </span>] <span style="white-space:pre"> </span>}, <span style="white-space:pre"> </span>"Internal": false, <span style="white-space:pre"> </span>"Attachable": false, <span style="white-space:pre"> </span>"Containers": { "602dbf1edc81813304b6cf0a647e65333dc6fe6ee6ed572dc0f686a3307c6a2c": { <span style="white-space:pre"> </span>"Name": "alpine2", <span style="white-space:pre"> </span>"EndpointID": "03b6aafb7ca4d7e531e292901b43719c0e34cc7eef565b38a6bf84acf50f38cd", <span style="white-space:pre"> </span>"MacAddress": "02:42:ac:11:00:03", <span style="white-space:pre"> </span>"IPv4Address": "172.17.0.3/16", <span style="white-space:pre"> </span>"IPv6Address": "" <span style="white-space:pre"> </span>}, <span style="white-space:pre"> </span>"da33b7aa74b0bf3bda3ebd502d404320ca112a268aafe05b4851d1e3312ed168": { <span style="white-space:pre"> </span>"Name": "alpine1", <span style="white-space:pre"> </span>"EndpointID": "46c044a645d6afc42ddd7857d19e9dcfb89ad790afb5c239a35ac0af5e8a5bc5", <span style="white-space:pre"> </span>"MacAddress": "02:42:ac:11:00:02", <span style="white-space:pre"> </span>"IPv4Address": "172.17.0.2/16", <span style="white-space:pre"> </span>"IPv6Address": "" <span style="white-space:pre"> </span>} <span style="white-space:pre"> </span>}, <span style="white-space:pre"> </span>"Options": { "com.docker.network.bridge.default_bridge": "true", "com.docker.network.bridge.enable_icc": "true", <span style="white-space:pre"> </span>"com.docker.network.bridge.enable_ip_masquerade": "true", "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0", "com.docker.network.bridge.name": "docker0", "com.docker.network.driver.mtu": "1500" <span style="white-space:pre"> </span>}, <span style="white-space:pre"> </span>"Labels": {} } ]
This output displays the information about the IP address of the gateway between the host and the network which is 172.17.0.1
It also shows the 2 containers with some information including the IP address of each. They being: 172.17.0.2 for alpine1 and 172.17.0.3 for alpine2.
The containers are now running in the background and are not attached. Use docker attachto connect to alpine1.
Command:$ docker attach alpine1
Output:
/ #
The "#" indicates that you have become a root user inside the container. To see more about the network interfaces of alpine1, you can use the command ipaddr show.
In order to check for the connectivity inside the alpine1, use the following command to ping google to get a response.
Command:# ping -c 2 google.com
-c 2 indicates that the attempt is limited to 2 times while pinging.
You might see a similar response
Output:
PING google.com (172.217.3.174): 56 data bytes 64 bytes from 172.217.3.174: seq=0 ttl=41 time=9.841 ms 64 bytes from 172.217.3.174: seq=1 ttl=41 time=9.897 ms --- google.com ping statistics --- 2 packets transmitted, 2 packets received, 0% packet loss round-trip min/avg/max = 9.841/9.869/9.897 ms
Now, since the network connectivity is verified, try to ping the second container with its IP address.
Here is how you do it:
Command:# ping -c 2 172.17.0.3
Output:
PING 172.17.0.3 (172.17.0.3): 56 data bytes 64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.086 ms 64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.094 ms --- 172.17.0.3 ping statistics --- 2 packets transmitted, 2 packets received, 0% packet loss round-trip min/avg/max = 0.086/0.090/0.094 ms ------------------------------
Since you get a response, try connecting using the name over IP address.
Command:# ping -c 2 alpine2
Output:
ping: bad address 'alpine2'
This operation fails.
Commands:
$ docker container stop alpine1 alpine2
$ docker container rm alpine1 alpine2
Frequently Asked Docker Interview Questions
This is the technique of networking where there is no isolation of networks.
In this tutorial, let's learn to start anginx container which connects to port 80 on the Docker host. This connection has the same level of isolation as having nginx process run directly on the host and not in a container. However, the storage, process namespace and user namespace are all isolated from the host in this nginx process.
For this operation, port 80 should be free on Docker Host for connection.
Note: Host networking is currently available only on Linux, and is not supported on Windows, MacOS, or Docker EE for Windows Server.
Follow the steps below to establish a Host networking
Command:docker run --rm -d --network host --name my_nginxnginx
Explanation
--rm - removes the container once it stops.
-d - starts the container detached(in the background, as a process).
Examining the network stack. This is done as in the following:
Command:docker container stopmy_nginx
Overlay Networking is the technique that deals with the swarm of services. This can be achieved by using the default overlay network. It is the network that the Docker sets up automatically when you join a swarm. However, it is not the best option for production scenarios.
In order to configure overlay networking, you should at least have a single node swarm. This is achieved by initiating the Docker and executing docker swarm init on the host.
In this method, you will learn about how an alpine service created will work from the individual container point of view.
Pre-requisites
This tutorial demands 3 virtual or physical Docker hosts which are connected on the same network with no firewall. All should have Docker 17.03 or higher versions installed. The 3 hosts will be referred to as the manager, worker-1, and worker-2. Manager is meant to manage the swarm and run the service while workers will just run the services.
In case you don't have three hosts, you can set them up on Ubuntu cloud services like Amazon EC2, where all the communications are allowed on the same network and then follow installation instructions for Docker CE on Ubuntu.
Procedure for Networking
Goal:
At the end of the networking, all the Docker hosts will be joined to form the swarm and the connection will be established using an overlay network called ingress.
Steps
Command:$ docker swarm init --advertise-addr=
The flag, --advertise-addr is optional, in case the host has only one network interface.
The output will have a token printed. Make sure that you store it in the Password manager, as it is used to join worker-1 and worker-2 to the swarm.
Command:$ docker swarm join --token
The flag, --advertise-addr is optional, in case the worker-1 has only one network interface.
Command:$ docker swarm join --token
Command: $ docker node ls
Output:
ID <span style="white-space:pre"> </span>HOSTNAME <span style="white-space:pre"> </span>STATUS<span style="white-space:pre"> </span>AVAILABILITY <span style="white-space:pre"> </span> d68ace5iraw6whp7llvgjpu48 *ip-172-31-34-146<span style="white-space:pre"> </span>Ready Active <span style="white-space:pre"> </span> Nvp5rwavvb8lhdggo8fcf7plg <span style="white-space:pre"> </span>ip-172-31-35-151<span style="white-space: pre;"> </span>Ready Active ouvx2l7qfcxisoyms8mtkgahw <span style="white-space:pre"> </span>ip-172-31-36-89 <span style="white-space:pre"> </span>Ready Active
You can now list the network interfaces on all the nodes and check that there is an overlay network called ingress and a bridge network called docker_gwbridge listed.
The docker_gwbridge connects the ingress to the Docker host's network interface for easy traffic flow between the swarm managers and the workers. By default, any swarm service created without the network specification is connected to ingress network. However, it is always insisted to create two separate overlay networks for a group of applications or tasks. In the following step, you will create two overlay networks, connecting two services to them.
Services and Overlay networks
1) First, create a new overlay network on manager called nginx-net.
To do this, use the command below::
Command:$ docker network create -d overlay nginx-net
You don't have to create overlays for the other two nodes, as they will be automatically created when the nodes run the corresponding service, requiring the overlay network.
2) Now, in order to have an open port for all networking scenarios, create a 5-replica Nginx service on manager, connected to nginx-net.
This service will publish the port 80 as public. Hence, all other services can communicate with each other, without having to open any other ports.
Command:$ docker service create --name my-nginx --publish target=80,published=80 --replicas=5 --network nginx-net nginx
Note: Only on manager, the services can be created.
If no mode is specified as an option with the flag --publish, then ingress will be considered by default.
This means that, if you browse for port 80 on any of the 3 nodes, you will be connected to one of the 5 services on port 80, even if there are no active tasks on the node you are browsing on.
3) In order to keep a track of the service set up, use the command:
Command:docker service ls
4) Now, inspect for nginx-net on all the 3 nodes. Note here that, you did not explicitly create the overlay network for the workers. However, Docker took care of it. Notice in the output for the containers and peers section. Containers will provide all the service tasks associated with the container which is connected to overlay network from that host.
5) Notice the information regarding the points and endpoints by executing the service inspect command on manager.
Command:docker service inspect my-nginx
6) Create a new network called nginx-net-2 and update the service to use this network. To perform the same, use the command below:
Commands:
$ docker network create -d overlay nginx-net-2
$ docker service update --network-add nginx-net-2 --network-rmnginx-net my-nginx
7) Run the command dockernetwork inspectnginx-net, to ensure that none of the containers are connected to this network.
Run docker network inspect nginx-net-2, to see that all service task containers are up and running on this network.
Note: The overlay networks are created automatically on the service tasks, however, it is not removed automatically.
8) The last step is to clean up the services and networks. Execute the following commands on manager. The networks on the other nodes will be removed upon instructions from the manager.
Commands:
$ docker service rm my-nginx
$ docker network rmnginx-net nginx-net-2
In this networking, the docker daemon routes the traffic to the corresponding container based on the MAC address received.
In this procedure, we will setup the macvlan network and attach containers to it.
In order to establish this networking, make sure that you have access to your physical networking equipment, as most cloud providers block macvlan networking.
This is supported only on Linux hosts with a minimum of Version 3.9 of the Linux Kernel. It is not available for the other OS like Windows, Mac, and Docker EE for Windows Server.
The example shown below assumes that the ethernet interface is eth0. If the device is configured with a different name, then use the corresponding name.
Procedure
In this example, the traffic flows through eth0 and Docker routes this to the containers based on their MAC addresses. The macvlan networking makes the containers seem to be physically attached to the network.
1) The first step is to create a macvlan network by name my-macvlan-net. Use the following command for the same.
Command:$ docker network create -d macvlan --subnet=172.16.86.0/24 --gateway=172.16.86.1 -o parent=eth0 my-macvlan-net
You can now list or inspect the network to ensure that the network exists and is macvlan network.
2) Now, start an alpine container and attach it to the network created. You can refer to the command below for the same:
Command:$ docker run --rm -itd --network my-macvlan-net --name my-macvlan-alpine alpine:latest ash
The --dit flag starts the container in the background in a detached mode.
The --rm flag removes the container when it is stopped.
3) Inspect the my-macvlan-alpinecontainer and see the MacAddress key within the network.
Command:$ docker container inspect my-macvlan-alpine
Output:
...truncated... "Networks": { "my-macvlan-net": { <span style="white-space:pre"> </span>"IPAMConfig": null, <span style="white-space:pre"> </span>"Links": null, <span style="white-space:pre"> </span>"Aliases": [ <span style="white-space:pre"> </span>"bec64291cd4c" <span style="white-space:pre"> </span>], <span style="white-space:pre"> </span>"NetworkID": "5e3ec79625d388dbcc03dcf4a6dc4548644eb99d58864cf8eee2252dcfc0cc9f", <span style="white-space:pre"> </span>"EndpointID": "8caf93c862b22f379b60515975acf96f7b54b7cf0ba0fb4a33cf18ae9e5c1d89", <span style="white-space:pre"> </span>"Gateway": "172.16.86.1", <span style="white-space:pre"> </span>"IPAddress": "172.16.86.2", <span style="white-space:pre"> </span>"IPPrefixLen": 24, <span style="white-space:pre"> </span>"IPv6Gateway": "", <span style="white-space:pre"> </span>"GlobalIPv6Address": "", <span style="white-space:pre"> </span>"GlobalIPv6PrefixLen": 0, <span style="white-space:pre"> </span>"MacAddress": "02:42:ac:10:56:02", <span style="white-space:pre"> </span>"DriverOpts": null } } ...truncated
4) Use docker exec commands to see how the conatiners see themselves in the network.
Command:$ docker exec my-macvlan-alpine ipaddr show eth0
Output:
9: eth0@tunl0: <broadcast,multicast,up,lower_up,m-down>mtu 1500 qdiscnoqueue state UP link/ether 02:42:ac:10:56:02 brdff:ff:ff:ff:ff:ff inet 172.16.86.2/24 brd 172.16.86.255 scope global eth0 valid_lft forever preferred_lft forever</broadcast,multicast,up,lower_up,m-down>
Command:$ docker exec my-macvlan-alpine ip route
default via 172.16.86.1 dev eth0
172.16.86.0/24 dev eth0 scope link src 172.16.86.2
5) First, stop the container and remove the network attached.
$ docker container stop my-macvlan-alpine
$ docker network rm my-macvlan-net
This brings us to the end of the networking on Dockers. The fundamental knowledge on Networking is ensured through this chapter. Make sure that these concepts are well ingrained, before delving deep in other aspects of Networking in Dockers.
List Of MindMajix Docker Courses:
Free Demo for Corporate & Online Trainings.
|
https://mindmajix.com/docker/networking-in-the-docker
|
CC-MAIN-2019-30
|
refinedweb
| 4,066
| 55.03
|
Related Titles
- Full Description
This book presents the C## using figures; short, focused code samples; and clear, concise explanations.
Figures are of prime importance in this book. While teaching programming seminars, Daniel Solis found that he could almost watch the lightbulbs#, this is just what youre looking for.
What youll learn
- Details of the C# 2010 language presented in a clear, concise treatment
- New features in the latest version of .NET,
- Namespaces and Assemblies
- Exceptions
- Structs
- Enumerations
- Arrays
- Delegates
- Events
- Interfaces
- Conversions
- Generics
- Enumerators and Iterators
- Introduction to LINQ
- Introduction to Asynchronous Programming
- Preprocessor Directives
- Reflection and Attributes
- Other Topics
- Source Code/Downloads
-
- Errata
Please Login to submit errata.On page Kindle Version from amazon Location 1423 of 11655:in Chapter 4, Class Members, Explicit and Implicit Field Initialization. There is an example of declaring fields that reads:
MyClass
{
int F1; // Initialized to 0 - value type
string F2; // Initialized to null - reference type
int F3; = 25; // Initialized to 25
string F4 = "abcd"; // Initialized to "abcd"
}
There is an extraneous ';' after F3 declaration..
On page 82:in the declaration of the Avg method inside the MyClass class, it reads :
return (input1 + input2) / 2.0F;
i would better write :
return (input1 + imput2)/2;
".0F" at the end of the line is useless. and, it provokes an error !
On page 83:the variables of the func1 function reads as follows :
float j = 2.6F;
float k = 5.1F;
i am not sure the "F" character actually makes sense.
i would better write :
float j = 2.6;
float k = 5.1;
|
http://www.apress.com/microsoft/c/9781430232827
|
CC-MAIN-2015-27
|
refinedweb
| 257
| 55.84
|
To add custom Java code to a Drools rule in Event Analytics in the Infor Grid:
- Write your Java code, compile it, and archive it to a JAR file:
// javac thibaud\HelloWorld.java // jar cvf HelloWorld.jar thibaud\HelloWorld.class package thibaud; public class HelloWorld { public static String hello(String CUNO) { return "Hello, " + CUNO; } }
- Find the host of Event Analytics as explained here, copy/paste the JAR file to the application’s lib folder in the Infor Grid, and restart the application to load the JAR file:
- Write the Drools Rule that makes use of your Java code, reload the rules, and test:
Thank you.
5 thoughts on “Java code in Event Analytics rules”
UPDATE: In the section “then”, we can use event.getElementValue(“CUNO”)
Hello Thibaud. Interesting post. Trying to figure out how it works but i am confused. I set up a rule (MPLIND, create record during PPS300) in analytics to post an event if PUOS=50 only.
If i put a system.out.println(event.getElementValue(“PUOS”)) in the rule, i see the debug line only for PO receipt in MPLIND status 50.
Then i set up MEC as a suscriber in the partner admin (M3:MPLIND:C::P2).
As a result, MEC receives 100% of events, regardless the status of the line added in MPLIND. If i do a receipt for a PO with direct put-away (status 75 immediately), an xml file is sent to MEC.
Where am i wrong ? How can we setup MEC to receive only events filtered by analytics ?
Thank you!
Bonjour Maxime. Supposing your Event Analytics rule does
event.postEvent ("Maxime"), then your MEC you should subscribe to
EventAnalytics:Maxime(not M3:MPLIND) — Thibaud
Excellent it works ! Thank you.
LikeLiked by 1 person
|
https://m3ideas.org/2015/04/28/java-code-in-event-analytics-rules/
|
CC-MAIN-2021-25
|
refinedweb
| 292
| 63.9
|
Meet the steam tables
Posted February 28, 2013 at 10:09 PM | categories: uncategorized | tags: steam, thermodynamics | View Comments
Updated June 26, 2013 at 07:00 PM
Table of Contents
We will use the iapws module. Install it like this:
pip install iapws
Problem statement: A Rankine cycle operates using steam with the condenser at 100 degC, a pressure of 3.0 MPa and temperature of 600 degC in the boiler. Assuming the compressor and turbine operate reversibly, estimate the efficiency of the cycle.
Starting point in the Rankine cycle in condenser.
we have saturated liquid here, and we get the thermodynamic properties for the given temperature. In this python module, these properties are all in attributes of an IAPWS object created at a set of conditions.
1 Starting point in the Rankine cycle in condenser.
We have saturated liquid here, and we get the thermodynamic properties for the given temperature.
from iapws import IAPWS97 T1 = 100 + 273.15 #in K sat_liquid1 = IAPWS97(T=T1, x=0) # x is the steam quality. 0 = liquid P1 = sat_liquid1.P s1 = sat_liquid1.s h1 = sat_liquid1.h v1 = sat_liquid1.v
2 Isentropic compression of liquid to point 2
The final pressure is given, and we need to compute the new temperatures, and enthalpy.
P2 = 3.0 # MPa s2 = s1 # this is what isentropic means sat_liquid2 = IAPWS97(P=P2, s=s1) T2, = sat_liquid2.T h2 = sat_liquid2.h # work done to compress liquid. This is an approximation, since the # volume does change a little with pressure, but the overall work here # is pretty small so we neglect the volume change. WdotP = v1*(P2 - P1); print print('The compressor work is: {0:1.4f} kJ/kg'.format(WdotP))
>>> >>> >>> >>> >>> >>> ... ... ... >>> The compressor work is: 0.0030 kJ/kg
The compression work is almost negligible. This number is 1000 times smaller than we computed with Xsteam. I wonder what the units of v1 actually are.
3 Isobaric heating to T3 in boiler where we make steam
T3 = 600 + 273.15 # K P3 = P2 # definition of isobaric steam = IAPWS97(P=P3, T=T3) h3 = steam.h s3 = steam.s Qb, = h3 - h2 # heat required to make the steam print print 'The boiler heat duty is: {0:1.2f} kJ/kg'.format(Qb)
>>> >>> >>> >>> >>> >>> >>> >>> The boiler heat duty is: 3260.69 kJ/kg
4 Isentropic expansion through turbine to point 4
steam = IAPWS97(P=P1, s=s3) T4, = steam.T h4 = steam.h s4 = s3 # isentropic Qc, = h4 - h1 # work required to cool from T4 to T1 print print 'The condenser heat duty is {0:1.2f} kJ/kg'.format(Qc)
>>> >>> >>> >>> The condenser heat duty is 2317.00 kJ/kg
5 To get from point 4 to point 1
WdotTurbine, = h4 - h3 # work extracted from the expansion print('The turbine work is: {0:1.2f} kJ/kg'.format(WdotTurbine))
The turbine work is: -946.71 kJ/kg
6 Efficiency
This is a ratio of the work put in to make the steam, and the net work obtained from the turbine. The answer here agrees with the efficiency calculated in Sandler on page 135.
eta = -(WdotTurbine - WdotP) / Qb print('The overall efficiency is {0:1.2%}.'.format(eta))
The overall efficiency is 29.03%.
7 Entropy-temperature chart
The IAPWS module makes it pretty easy to generate figures of the steam tables. Here we generate an entropy-Temperature graph. We do this to illustrate the path of the Rankine cycle. We need to compute the values of steam entropy for a range of pressures and temperatures.
import numpy as np import matplotlib.pyplot as plt plt.figure() plt.clf() T = np.linspace(300, 372+273, 200) # range of temperatures for P in [0.1, 1, 2, 5, 10, 20]: #MPa steam = [IAPWS97(T=t, P=P) for t in T] S = [s.s for s in steam] plt.plot(S, T, 'k-') # saturated vapor and liquid entropy lines svap = [s.s for s in [IAPWS97(T=t, x=1) for t in T]] sliq = [s.s for s in [IAPWS97(T=t, x=0) for t in T]] plt.plot(svap, T, 'r-') plt.plot(sliq, T, 'b-') plt.xlabel('Entropy (kJ/(kg K)') plt.ylabel('Temperature (K)') plt.savefig('images/iawps-steam.png')
>>> >>> <matplotlib.figure.Figure object at 0x000000000638BC18> >>> >>> ... ... ... ... [<matplotlib.lines.Line2D object at 0x0000000007F9C208>] [<matplotlib.lines.Line2D object at 0x0000000007F9C400>] [<matplotlib.lines.Line2D object at 0x0000000007F9C8D0>] [<matplotlib.lines.Line2D object at 0x0000000007F9CD30>] [<matplotlib.lines.Line2D object at 0x0000000007F9E1D0>] [<matplotlib.lines.Line2D object at 0x0000000007F9E630>] ... >>> >>> >>> [<matplotlib.lines.Line2D object at 0x0000000001FDCEB8>] [<matplotlib.lines.Line2D object at 0x0000000007F9EA90>] >>> <matplotlib.text.Text object at 0x0000000007F7BE48> <matplotlib.text.Text object at 0x0000000007F855F8>
We can plot our Rankine cycle path like this. We compute the entropies along the non-isentropic paths.
T23 = np.linspace(T2, T3) S23 = [s.s for s in [IAPWS97(P=P2, T=t) for t in T23]] T41 = np.linspace(T4, T1 - 0.01) # subtract a tiny bit to make sure we get a liquid S41 = [s.s for s in [IAPWS97(P=P1, T=t) for t in T41]]
And then we plot the paths.
plt.plot([s1, s2], [T1, T2], 'r-', lw=4) # Path 1 to 2 plt.plot(S23, T23, 'b-', lw=4) # path from 2 to 3 is isobaric plt.plot([s3, s4], [T3, T4], 'g-', lw=4) # path from 3 to 4 is isentropic plt.plot(S41, T41, 'k-', lw=4) # and from 4 to 1 is isobaric plt.savefig('images/iawps-steam-2.png') plt.savefig('images/iawps-steam-2.svg')
[<matplotlib.lines.Line2D object at 0x0000000008350908>] [<matplotlib.lines.Line2D object at 0x00000000083358D0>] [<matplotlib.lines.Line2D object at 0x000000000835BEB8>] [<matplotlib.lines.Line2D object at 0x0000000008357160>]
8 Summary
This was an interesting exercise. On one hand, the tedium of interpolating the steam tables is gone. On the other hand, you still have to know exactly what to ask for to get an answer that is correct. The iapws interface is a little clunky, and takes some getting used to. It does not seem as robust as the Xsteam module I used in Matlab.
Copyright (C) 2013 by John Kitchin. See the License for information about copying.
|
http://kitchingroup.cheme.cmu.edu/blog/2013/02/28/Meet-the-steam-tables/
|
CC-MAIN-2020-05
|
refinedweb
| 1,018
| 61.73
|
GLAD is available
Maoni
GC ETW series –
Processing GC ETW Events Programmatically with the GLAD Library (this post)
End of last year I mentioned we wanted to provide an API for you to really investigate GC/managed memory related performance called GLAD. Well, the source finally got opened source on github. So GLAD is available. The repo is called PerfView but you actually just need the TraceEvent project (but it’s much easier to just build the whole solution then add the reference to the resulting Microsoft.Diagnostics.Tracing.TraceEvent.dll). Below is a very simple example of getting the total GC pause time for each process (that has GC pauses) and printing out this info along with the process name and pid.
using System; using System.Diagnostics.Tracing; using Microsoft.Diagnostics.Tracing.Session; using Microsoft.Diagnostics.Tracing.Parsers; using Microsoft.Diagnostics.Tracing.Analysis; using Microsoft.Diagnostics.Tracing.Analysis.JIT; using Microsoft.Diagnostics.Tracing.Analysis.GC; using Microsoft.Diagnostics.Tracing.Parsers.Clr; using System.Collections.Generic; namespace GCInfoProcessing { class Program { // Given an .etl file, print out GC stats. static void DecodeEtl(string strName) { using (var source = new Microsoft.Diagnostics.Tracing.ETWTraceEventSource(strName)) { Console.WriteLine("{0}", strName); source.NeedLoadedDotNetRuntimes(); source.Process(); List GCs = null; foreach (var proc in source.Processes()) { var mang = proc.LoadedDotNetRuntime(); if (mang == null) continue; int total_gcs = 0; double total_pause_ms = 0; // This is the list of GCs with processed info GCs = mang.GC.GCs; for (int i = 0; i < GCs.Count; i++) { TraceGC gc = GCs[i]; total_gcs++; total_pause_ms += gc.PauseDurationMSec; } if (total_gcs > 0) Console.WriteLine("process {0} ({1}): total {2} GCs, pause {3:n3}ms", proc.Name, proc.ProcessID, total_gcs, total_pause_ms); } } } static void Main(string[] args) { DecodeEtl(args[0]); } } }
I’ll give a brief description of how things work for GLAD but with the code publicly available it should be fairly easy to just build and step through the code to see how it works.
TraceEvent\Computers\TraceManagedProcess.cs processes the GC ETW events and generates the info available in the TraceGC class (I edited the comments so they don’t cause trouble for html):
public class TraceGC { // Primary GC information // Set in GCStart (starts at 1, unique for process) public int Number; // Type of the GC, eg. NonConcurrent, Background or Foreground // Set in GCStart public GCType Type; /// Reason for the GC, eg. exhausted small heap, etc. // Set in GCStart public GCReason Reason; /// Generation of the heap collected. If you compare Generation at the start and stop GC events they may differ. // Set in GCStop(Generation 0, 1 or 2) public int Generation; /// Time relative to the start of the trace. Useful for ordering // Set in Start, does not include suspension. public double StartRelativeMSec; /// Duration of the GC, excluding the suspension time // Set in Stop This is JUST the GC time (not including suspension) That is Stop-Start. public double DurationMSec; /// Duration the EE suspended the process // Total time EE is suspended (can be less than GC time for background) public double PauseDurationMSec; //...... }
You will see a bunch of fields with this comment:
[Obsolete("This is experimental, you should not use it yet for non-experimental purposes.")]
It’s not obsolete – it’s just experimental. We wanted to organize the info available in TraceGC in a user friendly way (and you are welcome to suggest/contribute to it!) and we want your input to really flesh out the API aspect. Do we just want to expose these as is, or want to have a more advanced class to represent info that’s less frequently used? It’d be great to hear some opinions on this. Feel free to either leave them as comments here or on the github repo.
The process part is done at the beginning of this file. An example is
source.Clr.GCStart += delegate (GCStartTraceData data) { // .... };
If you want to see examples of how TraceGC is used, you can get plenty of such examples in PerfView\GcStats.cs – this is what generates the GCStats view in PerfView.
Looking forward to seeing the analysis that folks write on memory analysis for .NET 🙂
Edited on 03/01/2020 to add the indices to GC ETW Event blog entries
|
https://devblogs.microsoft.com/dotnet/556-2/
|
CC-MAIN-2020-16
|
refinedweb
| 691
| 50.02
|
I have done a tracert and after the kinect server I have seen a internal ip I have not seen before. Does anyone know more about it. it's an internal ip.
10.55.85.80.
Is this of any concern?
Kind Regards
For Free Games, Geekiness and Reviews, visit :
Home Of The Overrated Raccoons
Michael Murphy |
Want to be with an epic ISP? Want $20 to join them too? Well, use this link to sign up to BigPipe!
The Router Guide | Community UniFi Cloud Controller | Ubiquiti Edgerouter Tutorial
Tracing route to twitch.tv [192.16.71.165]
over a maximum of 30 hops:
1 <1 ms <1 ms <1 ms pfsense.lanrouter [10.3.57.1]
2 30 ms 25 ms 24 ms 218-101-115-254.dsl.dyn.ihug.co.nz [218.101.115.254]
3 30 ms 35 ms 23 ms bvi-400.bgnzldv02.akl.vf.net.nz [203.109.180.242]
4 23 ms 23 ms 25 ms bvi-188.bgnzldv02.akl.vf.net.nz [203.109.180.241]
5 51 ms 50 ms 50 ms 10.123.80.58
6 * * * Request timed out.
#include <std_disclaimer>
Any comments made are personal opinion and do not reflect directly on the position my current or past employers may have.
CcMaN:RunningMan: Post the traceroute.
AFAIK Trustpower use CG-NAT, so you'll see internal IPs after your router.
I'm pretty certain TPW don't use CGNAT...
Services are provided with a dynamic IP address or Carrier Grade NAT.
|
https://www.geekzone.co.nz/forums.asp?forumid=49&topicid=186912
|
CC-MAIN-2018-05
|
refinedweb
| 251
| 77.64
|
Multiple Instances
This has been deprecated and is no longer supported.
Home Assistant supports running multiple synchronised instances using a master-slave model. Whenever
events.fire or
states.set is called on the slave it will forward it to the master. The master will replicate all events and changed states to its slaves.
Overview of the Home Assistant architecture for multiple devices.
A slave instance can be started with the following code and has the same support for components as a master instance.
import homeassistant.remote as remote import homeassistant.bootstrap as bootstrap # Location of the Master API: host, password, port. # Password and port are optional. remote_api = remote.API("127.0.0.1", "password", 8124) # Initialize slave hass = remote.HomeAssistant(remote_api) # To add an interface to the slave on localhost:8123 bootstrap.setup_component(hass, 'frontend') hass.start() hass.block_till_stopped()
Because each slave maintains its own Service Registry it is possible to have multiple slaves respond to one service call.
|
https://home-assistant.io/developers/multiple_instances/
|
CC-MAIN-2018-05
|
refinedweb
| 159
| 52.76
|
Is data provided by third parties under a CC-BY-SA 3.0 license suitable to inclusion into OSM?
The import catalog ( ) lists several cases where CC-licensed data has been imported, but I want to double check.
asked
08 Oct '13, 15:42
Augusto S
96●3●4●8
accept rate:
0%
edited
08 Oct '13, 16:18
Vclaw
9.2k●8●93●140
No, it isn't.
Since OSM has moved to a combination of the Open Database Licence and our own Contributor Terms, we now provide data under a licence which is (in some ways) more permissive than CC-BY-SA. Therefore CC-BY-SA data can't be imported, because it doesn't grant these permissions.
However, you can always approach the data owners, and ask for their permission to import it into OSM.
Don't forget you must follow the Import Guidelines, including discussing your edits with the imports@ mailing list and your local mailing list before starting the import.
answered
08 Oct '13, 17:49
Richard ♦
30.0k●43●267●401
accept rate:
19%
I see... But CC-BY without share-alike requirement would be suitable for an import, right? (Lots of imports from the catalog consist of CC-BY-licensed data.)
The attribution requirements on CC BY 2.0 and 3.0 differ from the ODbL requirements and to import a CC BY source you need to verify that they are happy with the attribution the ODbL guarantees.
Once you sign in you will be able to subscribe for any updates here
Answers
Answers and Comments
Markdown Basics
learn more about Markdown
This is the support site for OpenStreetMap.
Question tags:
import ×184
license ×150
question asked: 08 Oct '13, 15:42
question was seen: 3,410 times
last updated: 11 Oct '13, 12:42
Can I import CC-BY-SA data into OpenStreetMap?
Contributor terms and import of ODBL-licensed data
Import data from Esri Community Maps AOIs
How do I import map data from a .dwg file to OpenStreetMap?
Publishing an OpenStreetMap
How should I go about importing a city?
What effects will the license change have?
[closed] Commercial use in an application - licenceable?
Can I contribute the same data to non-OSM mapping projects?
Uploading a GPX from TTGpsLogger doesn't work
First time here? Check out the FAQ!
|
https://help.openstreetmap.org/questions/27017/importing-data-provided-under-cc-by-sa-license
|
CC-MAIN-2021-21
|
refinedweb
| 389
| 66.33
|
I wonder if there is some way to register a snippet in ST2, that would run a specified plug-in instead of embedding a static text? E.g. when commenting some piece of code, with intention to generate doxygen/phpdoc documentation later, I would like to analyze the declaration of a function I'm commenting on. For example the snippet is right before the piece of code:
function myfunc($a, $b)
I imagine I could write a plugin, that would expand it
Generally, the plugin would have to implement simple syntax analysis for a ton of languages. If there is a way to do what I just described, I would probably take up the work to provide snippets for as much languages as I could, since I think it would benefit a bunch of people.There is already a related question, but it never had an answer.
It's not possible to do it in this manner, although you can have a plugin generate a snippet on the fly, e.g.,
self.view.run_command("insert_snippet", {"contents": "Hello $1 and $0"})
I happened to find some code within the HTML/html_completions.py plugin, which did exactly what I asked for -- dynamic completionHowever, I do have some questions, if someone happens to know an answer and minds to answer:
return (u'/**', snippet)]
-- the first part doesn't seem to affect pretty much anything
The code is as follows (please don't mock me yet):
import sublime, sublime_plugin
import re
class CodeDoc(sublime_plugin.EventListener):
def on_query_completions(self, view, prefix, locations):
# only complete single line/selection
if len(locations) != 1:
return ]
# todo check is supported type of file (php)
line = view.substr(sublime.Region(view.line(locations[0]).a, locations[0]))
rex = re.compile("^\s*(.+)")
m = rex.match(line)
if not m:
return ]
if m.group(1) != '/**':
return ]
# find end of completion line
currLineEnd = view.find('\n\r]', locations[0])
if currLineEnd is None:
return ]
# find end of function/class declaration (php delimiter)
nextLineEnd = view.find('{]', currLineEnd.end())
if nextLineEnd is None:
return ]
declaration = view.substr(sublime.Region(currLineEnd.end(), nextLineEnd.begin()))
if declaration.find("function") > -1:
snippet = self.expandPhpFunction(declaration)
if snippet:
return (u'/**', snippet)]
elif declaration.find("class") > -1:
snippet = self.expandPhpClass(declaration)
if snippet:
return (u'/**', snippet)]
return ]
def expandPhpClass(self, declaration):
snippet = '/**\n'
snippet += ' * ${1}\n'
snippet += ' * @package ${2:default}\n'
snippet += ' */'
return snippet
def expandPhpFunction(self, declaration):
rex = re.compile("\((.*)\)", re.DOTALL)
m = rex.search(declaration)
if not m:
return None
params = m.group(1).split(',')
snippet = '/**\n * ${1:Description}\n'
i = 2
for p in params:
p2 = p.find('=')
if p2 > -1:
p = p[0 : p2]
p = p.strip()
p = p.replace('$', '\$')
p = p.replace('&', '')
if p == '':
continue
snippet += ' * @param ${' + str(i) + ':type} ' + p + ' ${' + str(i+1) + '}\n'
i += 2
snippet += ' * @return ${' + str(i) + ':type}\n'
snippet += ' */'
return snippet
Currently, I try to expand "/**" to a phpdoc block for a php class/function. If anybody has any suggestions on the code, I'd appreciate them a lot (since I have pretty much no knowledge in python, but I try to learn as I go)
It is fairly easy to determine the current language and do something specific based on that language.
from os.path import basename
#list supported syntax
syntax_list = {
"PHP" : #syntax specific stuff here,
"C++" : #syntax specific stuff here
}
#get current syntax
syntax = basename(self.view.settings().get('syntax')).replace('.tmLanguage','')
#is current syntax in your list
for item in syntax_list:
if item == syntax:
#Get syntax specifc stuff
syntax_specific_stuff = syntax_list[item]
This code is not tested, but the basic idea is to keep a list with the different syntax as a key.You get the current syntax and see if there is a key for it in the list, if it is in the list you can retrieve some syntax specific flag or or whatever: function call, flag, text.
You could do things completely different depending on what you need to do, but you can use the one line to retrieve syntax and do whatever you need to do with it.
Keep in mind that I am not really a python developer, nor do I understand the most pythonic ways to do things. I am just a guy who picks up languages as I need them.
Hey, i actually just wrote a plugin which does this, and wrote about it on the Plugin Announcment forum about 5 minutes ago
Currently, it's geared to Javascript, but it shouldn't take too much effort to expand to give better support for PHP, C++, ... It'd be great to read your feedback
I've seen it probably just as you've published it on github
Looks and feels great, I'll try to merge the code I wrote for myself into it, since yours looks fancier. The only thing I don't like, though, is that "/**" triggers a documenting comment regardless of wether I want it or not — I think it should pop up in as autocompletion option.
That could work, along with perhaps a configuration option? Fork the repo and send me a pull request
|
https://forum.sublimetext.com/t/context-sensitive-snippet/2628
|
CC-MAIN-2016-18
|
refinedweb
| 847
| 51.99
|
not require knowledge of JDBC, EJB, Hibernate, or Spring interfaces.
Use a Data..., exam in 3 days, and just now i found out our lecturer post a demo on DAO... ,and you can add, edit employee in that table.
In eclipse workspace, she
DAO in Struts
DAO in Struts Can Roseindia provide a simple tutorial for implementation of
DAO with struts 1.2?
I a link already exits plz do tell me.
Thank you
An introduction to spring framework
.
Just as Hibernate
attacks CMP as primitive ORM technology, Spring attacks... programming to Spring.
Using this we can add annotation to the source code that instructs Spring
on where and how to apply aspects.
4. Spring DAO:
The Spring's JDBC
Java Data Layer - Framework
Java Data Layer how does Ojb, JDO, EJB 3.0-Entity Bean, JPA, OpenJPA and Hibernate are differ from each other?
Hi friend,
For more information on JPA/Hibernate visit Projects
.
Understanding
Spring Struts Hibernate DAO Layer... by
combining all the three mentioned frameworks e.g. Struts, Hibernate and
Spring... that can be used later in any big Struts Hibernate and
Spring basedate
Struts Flow
Struts Flow can u explain about struts flow with clear explaination Hello,
Please visit the following link:
Integrating MyFaces , Spring and Hibernate
Capabilities
In this section we will add Spring and Hibernate Capabilities to our web
application. So for this, we will have to add the spring and hibernate...Integrating MyFaces , Spring and Hibernate
struts flow
struts flow what is the diff between perform() and execute() in struts?
what is the diff between DispatchAction() and LoojupDispatchAction() in struts
spring hibernate
spring hibernate I need to save registration details in a database table through jsp using spring an hibernate....and the fields in the registration... that have same flow as needed in my application???
Please visit
spring with hibernate - Spring
the following link: with hibernate Hi,
I need the sample code for user registration using spring , hibernate with my sql.
Please send the code as soon
Struts Flow In Depth
Struts Flow In Depth Struts Flow In Depth
Struts Articles
is isolated from the user).
Bridge the gap between Struts and Hibernate
Hibernate and Struts are currently among the most popular open.... This article identifies some of the gaps between Struts and Hibernate, particularly
What is Spring?
:
The JDBC abstraction layer of the Spring offers a
meaningful exception...;
Integration with Hibernate, JDO, and iBATIS: Spring provides best
Integration services with Hibernate, JDO and iBATIS.
AOP Framework: Spring is best
About Hibernate, Spring and JSF Integration Tutorial
tier. Spring integrates Hibernate very well.
Flow of the application...About Hibernate, Spring and JSF Integration Tutorial
... explains the integration of Hibernate and
Spring Frameworks into JSF (MyFaces Hibernate Spring - Login and User Registration - Hibernate
Struts Hibernate Spring - Login and User Registration Hi All,
I fallowed instructions that was given in Struts Hibernate Spring, Login and user....
Struts control data flow
Struts control data flow How Struts control data flow
plese tell -Struts or Spring - Spring
/hibernate/index.shtml... about spring.
which frameork i should do Struts or Spring and which version.
i...,
You need to study both struts and spring.
Please visit the following links
Hibernate code - Hibernate
Hibernate code Hi
How to intract struts , spring and Hibernet... the following link:
Here you will get application comprises of Struts,Spring and Hibernate.Hope
DAO Example
DAO Example Dear Friends
Could any one please give me any example of DAO application in struts?
Thanks & Regards
Rajesh
Spring with Hibernate - Spring
Spring with Hibernate When Iam Executing my Spring ORM module (Spring with Hibernate), The following messages is displaying on the browser window...)
javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
note The full stack trace of the root
struts flow - Struts
struts flow Struts flow Hi Friend,
Please visit the following links:
Thanks
java struts DAO - Struts
java struts DAO hai friends i have some doubt regarding the how to connect strutsDAO and action dispatch class please provide some example to explain this connectivity.
THANKS IN ADVANCE... on the Spring classes. While some of the code of
the integration layer such as data access...Introduction to spring 3
In this section we will discuss about Spring 3
Spring Web, Spring Web Modules, Spring Web Example
layer.
The Spring Web modules allows the developers to develop/manage web...
easier.
The Spring Web Layer contains following modules:
Web....
With the help of many examples we will understand the Spring Web layer
modules in details... Core
Spring Context
Spring DAO
Spring ORM
Spring AOP...Why to use Spring Framework in a web application?
In this article 4 MVC Hello World Example: Spring 4 MVC Tutorial will full source code
, import project in Eclipse, Add Spring dependencies and finally add the
code... in Eclipse
Add the Spring dependencies
Add the JSP and Java files... will add the Spring dependencies in the
pom.xml file
Step 13: Spring
JSF+SPRING+HIBERNATE - AOP
JSF+SPRING+HIBERNATE any form builder is available from database table to UI FORM LIKE list ,add,edit,delete and search
Adding Spring and Hibernate Capabilities
Spring and Hibernate
Capabilities to our web application. So for this, we will have to add the spring and
hibernate libraries and configuration files to the web...Adding Spring and Hibernate Capabilities
code problem - Hibernate
using hibernate in spring.
I've used Spring dependency injection too.I struck at DAO(data access Object)layer while executing the select statement in HQL...);
}}
----------------------DAO layer-------------------------------------
public class
Spring Architecture
for object-relational mapping APIs, including JDO, Hibernate and iBatis.
Spring Web: is part of web application development stack.
Spring DAO:- (Data Access Object) standardizes the data access work using JDBC, Hibernate or JDO. Book
the mapping for you.
Not only that, Hibernate makes it easy. Positioned as a layer...
Positioned as a layer between the application and the database, Hibernate... and frameworks like XDoclet, Struts, Webwork, Spring, and Tapestry.
Pro
The Complete Spring Tutorial
The Complete Spring Tutorial
In this tutorial I will
show you how you can integrate struts, spring and hibernate in your web... services,
Schedulers, Ajax, Struts, JSF and many other frameworks. The Spring MVC
What is the general flow of Hibernate communication with RDBMS?
What is the general flow of Hibernate communication with RDBMS? Hi,
What is the general flow of Hibernate communication with RDBMS?
thanks
struts - Framework
Struts Spring Hibernate Integration struts spring hibernate integration tutorial
Hibernate required jar files - Hibernate
,
Please visit the following link:
Download the full code from the above link and extract... to spring-hibernate integration.
Thanks
why to use hibernet as a dataacces layer
why to use hibernet as a dataacces layer plz give me the reply
net.roseindia.dao
This package is the part of Spring and Hibernate JPA integration application which contains DAO service interface and it's implementation class
Integrating JSF Spring and Hibernate
Integrating JSF Spring and Hibernate
Integrating JSF, Spring and Hibernate This article explains integrating JSF (MyFaces), Spring and Hibernate to build real
JSF,Integrating Presentation Layer
JSF,Integrating Presentation Layer
... about configuring the
presentation layer.
The presentation tier integration... ServiceFinder class to get the Spring managed bean.
FacesContext context
Java - Spring
Java Hi Roseindia,
Can you provide code for searching based on Id and name using Spring,Hibernate & Struts.
MAny Thanks
Ragahvendra
.../struts/hibernate-spring/index.shtml
Hope that it will be helpful for you
SPRING ... A JUMP START
; spring-dao.jar
: It contains DAO support and transaction infrastructure.
7. ...-hibernate.jar : It contains Hibernate 2.1 support, Hibernate
3.x support.
14. spring...
SPRING ... A JUMP START
-----------------------
by Farihah Noushene
User Registration Action Class and DAO code
);
Configurations to be made into struts-config.xml
a) Add the form bean...
User Registration Action Class and DAO code... is the full code of UserRegisterAction.java:
package
Spring Framework Training
;
Spring Web Flow
...
Spring Framework Training
Spring
Framework Training Course Objectives
Hibernate Search - Complete tutorial on Hibernate Search
Hibernate Search - Complete tutorial on Hibernate Search
Hibernate Search is the latest extension of Hibernate, which can be used to
add the full text search capabilities
Integrated Struts 2 Hibernate and JPA Training
Integration layer - Hibernate, Spring, JPA
* Developing the database...
Integrated Struts 2 Hibernate and JPA Training
... the complex web and enterprise applications with Struts 2,
Hibernate and JPA frameworks
insert code using spring - Spring
://
Hope that it will be helpful...insert code using spring Hi,
Can any one send code for inserting the data in database using spring and hibernate ..
Many thanks
Spring Context Loader Servlet
-app>
If you don't add the above code the Spring...
Spring Context Loader Servlet
In this section we will learn about Spring's Context
Plz Help Me
Plz Help Me Write a program for traffic light tool to manage time... to the main-street for another 40 seconds, if yes, and the sub-street was full...);
p2.add(b3);
getContentPane().add(p1);
getContentPane().add(p2
spring
spring sir can you explain me the flow of sample example by using spring? MVC Tutorials
.
Spring 3 MVC and Hibernate 3 Example - Learn how to
Integrate Hibernate with your Spring MVC framework. This
tutorial...-
Spring 3 MVC and Hibernate 3 Example Part 1,
Spring 3 MVC
DAO,DTO,VO Design patterns
DAO,DTO,VO Design patterns explain dao,dto,vo design patterns in strut 1.3?
Data Access Object (DAO) pattern is the most popular design... the huge data transfering from one layer to another layer which can
Data Access object (DAO) Design Pattern
Layer has proven good in separate business logic layer and
persistent layer. The DAO design pattern completely hides the data access
implementation... the DAO to adopt different access scheme without affecting to business
logic or its
How to add a bean in spring application?
How to add a bean in spring application? Hi,
How to add a bean in spring application?
Thanks
struts hibernate integration application
to show how to Integrate Struts
Hibernate and create an application. This struts... also use annotation
as mapping metadata.
About struts hibernate integration... of the struts hibernate integration
application example.
Description
full description of program code
.
i've a problem that plz describe me full description of this code
thanks...full description of program code import java.util.*;
class StringExample6
{
public static void main(String[] args)
{
Scanner input
Struts Tutorial
handles the
request and communicate with the model layer.
Versions Of Struts...
Struts is capable to integrate with the other frameworks like, Hibernate,
Spring, JSF.
Struts And Validation
Validation is the main feature of any web
full description of program code
full description of program code escribe me below code how this program code is make and work, and those method is use in this why?
import... is created, that will add the list elements using add() method. The method read() reads
dao
|
http://www.roseindia.net/tutorialhelp/comment/96028
|
CC-MAIN-2014-52
|
refinedweb
| 1,798
| 56.76
|
Universal Links
Before iOS 9 — the mechanism to open apps which were installed was to open Safari, try to open a deeplink and use a timer fallback to the App Store.
On iOS 9 Apple announced “Universal Links” where instead of opening Safari first, the iOS checks if the Universal Link is registered in the domain associated with the link, then checks if the corresponding app is installed. The app is opened if installed, and if not, Safari that iOS can establish a trust between them.
Deeplinking – How it Works:
When the app is installed or updated, iOS will check which websites are acceptable by this app as Universal Links. Then it will check each of these websites to verify whether the app is registered there. This is done by placing a file with the app IDs on the website and a list of domains in the app.
Preparing Your Website:
- Create a apple-app-site-association file. Note that there’s no .json file type
- Place the apple-app-site-association file and identify app IDs and paths on your website. You can define several apps in this file and iOS will follow the app order while looking for a match so that file must be present and its value must be an empty array.
The value of the “appID” key is the team ID and the bundle ID joined with a period. The team ID appears on Organization Profile > Account Summary on Apple’s Developer Portal. When you enter the Member Center on, click your name on the top right and select “View Account”:
The Team ID appears on the Developer Account Summary section:
3. Upload the apple-app-site-association file in the root of your HTTPS web server. The file should be accessible without redirects at http://<domain>/apple-app-site-association
Preparing your App
Creating provisioning profile
1. Assuming your app has been.
Handling when app is opened by Universal Link
To be able to subscribe for a native IOS App events you need to enable App Events API.
- Open IOS Native Editor Settings. Window -> Stan's Assets -> IOS Native -> Edit Settings
- Enable App Events API as showed on a screenshot bellow
Since IOS APP controllers (App Delegate) are started before Unity player, you need to check if an app was launched by a Universal Link. See the code snippet below:
namespace SA.iOS.UIKit ... string url = ISN_UIApplication.ApplicationDelegate.GetLaunchUniversalLink(); if(!string.IsNullOrEmpty(url)) { Debug.Log("ALaunch Universal Link Detecetd: " + url); }
To handle app restored from a background with Universal Link you need to subscribe to OnContinueUserActivity action.
namespace SA.iOS.UIKit ... ISN_UIApplication.ApplicationDelegate.ContinueUserActivity.AddListener((string url) => { Debug.Log("Universal Link Detecetd: " + url); });
Testing
Now that your app and website are set up,.
|
https://unionassets.com/ios-native-pro/universal-links-643
|
CC-MAIN-2018-47
|
refinedweb
| 459
| 53.51
|
If you depending on a external source to return static data you can implement
cachetools to cache data from preventing the overhead to make the request everytime you make a request to Flask.
This is useful when your upstream data does not change often. This is configurable with
maxsize and
ttl so whenever the first one's threshold is met, the application will fetch new data whenever the request has been made to your application.
Example
Let's build a basic flask application that will return the data from our
data.txt file to the client:
from flask import Flask from cachetools import cached, TTLCache app = Flask(__name__) cache = TTLCache(maxsize=100, ttl=60) @cached(cache) def read_data(): data = open('data.txt', 'r').read() return data @app.route('/') def main(): get_data = read_data() return get_data if __name__ == '__main__': app.run()
Create the local file with some data:
$ touch data.txt $ echo "version1" > data.txt
Start the server:
$ python app.py
Make the request:
$ curl version1
Change the data inside the file:
$ echo "version2" > data.txt
Make the request again:
$ curl version1
As the ttl is set to 60, wait for 60 seconds so that the item kan expire from the cache and try again:
$ curl version2
As you can see the cache expired and a new request has been made to read the file again and load it in cache, and then return to the client.
Thank You
Please feel free to show support by, sharing this post, making a donation, subscribing or reach out to me if you want me to demo and write up on any specific tech topic.
|
https://sysadmins.co.za/how-to-cache-data-with-python-flask/
|
CC-MAIN-2019-22
|
refinedweb
| 270
| 68.6
|
Message-ID: <562477116.849.1432300945602.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_848_873235302.1432300945602" ------=_Part_848_873235302.1432300945602 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
This is actually relatively simple; we only need a few of the Bo= o.Lang.Compiler classes to make it happen.=20
The classes most important to us are:=20
Using these classes and reflection, we will be able compile Boo scripts = to memory and run'em on inputted variables.=20
Suppose script.boo is a class that the user has written, and you want to= consume it by your application without the user having to dance with booc.= exe or some other fancy pants method.=20
static def stringManip(item as string): //static lets us invoke this method= without needing to instanize a class. =09return "'${item}'? What the hell are you talking about?"=20
Calling scripting Boo from Boo is ridiculously easy--too easy to even ex= plain, so here is the commented code.=20
import System import Boo.Lang.Compiler import Boo.Lang.Compiler.IO import Boo.Lang.Compiler.Pipelines booC =3D BooCompiler() booC.Parameters.Input.Add( FileInput("script.boo") ) booC.Parameters.Pipeline =3D CompileToMemory() //No need for an on-disk fil= e. booC.Parameters.Ducky =3D true //By default, all objects will be ducked typ= ed; no need for the user to "var as string" anywhere. context =3D booC.Run() //The main module name is always filename+Module in pascal case; //this file is actually RunBooModule! //Using duck-typing, we can directly invoke static methods //Without having to do the typical System.Reflection voodoo. if context.GeneratedAssembly is null: =09print join(e for e in context.Errors, "\n") else: =09var as duck =3D context.GeneratedAssembly.GetType("ScriptModule&quo= t;) =09print var.stringManip("Dance Dance Revolution") =09print var.stringManip("Techno music gives me hives.")=20
That was painless, wasn't it? As you're imaging, you can instanize class= es and call instance methods via the convience of duck typing in a similar = fashion. Consider the code-block below.=20
myClass =3D context.GeneratedAssembly.GetType("SomeClass", true, = true) //Here, we need the type. myInstance as duck =3D myClass() //Create an instanization of this type! myInstance.myMethod("Happy happy joy joy!")=20
That's it. Use myInstance as you would any other class, sans cod=
e completion
This method, of course, is not as interesting as using another language =
entirely to invoke our Boo script.
Let's see how we do it in C#:
using System; using System.Text; using System.Reflection; using Boo.Lang.Compiler; using Boo.Lang.Compiler.IO; using Boo.Lang.Compiler.Pipelines; namespace ConsoleApplication1 { class Program { static void Main(string[] args) { BooCompiler compiler =3D new BooCompiler(); compiler.Parameters.Input.Add(new FileInput("script.boo&qu= ot;)); compiler.Parameters.Pipeline =3D new CompileToMemory(); compiler.Parameters.Ducky =3D true; CompilerContext context =3D compiler.Run(); //Note that the following code might throw an error if the Boo = script had bugs. //Poke context.Errors to make sure. if (context.GeneratedAssembly !=3D null) { Type scriptModule =3D context.GeneratedAssembly.GetType(&qu= ot;ScriptModule"); MethodInfo stringManip =3D scriptModule.GetMethod("str= ingManip"); string output =3D (string)stringManip.Invoke(null, new obje= ct[] { "Tag" } ); Console.WriteLine(output); } else { foreach (CompilerError error in context.Errors) Console.WriteLine(error); } } } }=20
I compiled the above runBoo.cs script with this command (you can change = csc to gmcs if you use mono):=20
csc /r:Boo.Lang.Compiler.dll runBoo.cs=20
If you are on Windows, make sure you specify the paths to the boo dlls. = After compiling, either move runBoo.exe into the same folder as the boo dll= s or copy the dlls to the same folder as your exe. Then run the exe, making= sure "script.boo" is in the same folder as your current director= y:=20
runBoo.exe=20
As you can see the C# variant is a bit more verbose, but that's your bag= , let it roll.=20
Here are the highlights of the C# version.=20
Type scriptModule =3D context.GeneratedAssembly.GetType("ScriptModule&= quot;);=20
ScriptModule is the name of the class encapsulating the main method and = the stringManip method of the compiled "script.boo" It is a uniqu= ely generated name determined by the file-name plus a "Module" po= stfix.=20
string output =3D (string)stringManip.Invoke(null, new object[] { "Tag= " } );=20
This block of code invokes the stringManip method with one parameter: &q= uot;Tag." The first parameter, which is normally an instance of a clas= s, is null because the stringManip method is static and thus requires no in= stance method of ScriptModule.=20
It returns back the output of the stringManip method.
|
http://docs.codehaus.org/exportword?pageId=27473
|
CC-MAIN-2015-22
|
refinedweb
| 791
| 53.17
|
This small project allows you to decode DDEX files into friendly Python data types.
Project description
This project allows you to read DDEX files into friendly Python data types. XML files are decoded using the PyXB library.
Keep in mind that this is a fairly low level library that only aims at making DDEX files easier to read using Python. Some DDEX data structures expose lists containing only one element, and some value like UpdateIndicator are not cast as booleans.
- Free software: MIT license
- Documentation:.
- Repository:
Features
- Open an XML file into a DDEX data structure generated by pyxb corresponding to the DDEX version.
- Parse this DDEX data structure into a Python dict.
Supported DDEX versions
- 3.1.2
- 3.2 (untested)
- 3.3
- 3.4
- 3.4.1
- 3.5
- 3.5.1
- 3.6
Version 3.7 is causing issues with PyXB.
Quickstart
from ddexreader import open_ddex, ddex_to_dict xml_path = '/path/to/my/ddex_file.xml' ddex = open_ddex(xml_path) ddex_dict = ddex_to_dict(ddex)
How to add more DDEX definitions
After installing pyxb on your (unix) system, enter:
pyxbgen -u [the url to the definition file]
History
0.1.1 (2015-09-14)
- Added support for ERN 3.1.2
0.1.0 (2015-01-11)
- First release on PyPI.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
|
https://pypi.org/project/ddexreader/
|
CC-MAIN-2021-39
|
refinedweb
| 237
| 59.7
|
If you didn't know it already, now is a pretty exciting time for unit testing
in .NET. Tremendous progress is being made on several fronts: IDE integration,
process integration, and new test fixtures. This article will cover unit testing
in Visual Studio 2005, including VSTS unit testing, NUnit and MBUnit--the Superman of unit testing.
NUnit is the unit testing framework that has the majority of the market share.
It was one of the first unit testing frameworks for the .NET platform. It utilizes
attributes to identify what a test is. The TestFixture attribute
is used to identify a class that will expose test methods. The Test
attribute is used to identify a method that will exercise a test subject. Let's
get down to business and look at some code.
TestFixture
Test
First we need something to test:
public class Subject {
public Int32 Add(Int32 x, Int32 y)
{
return x + y;
}
}
That Subject class has one method: Add. We will test the Subject class by
exercising the Add method with different arguments.
Subject
Add
[TestFixture]
public class tSubject
{
[Test]
public void tAdd()
{
Int32 Sum;
Subject Subject = new Subject();
Sum = Subject.Add(1,2);
Assert.AreEqual(3, Sum);
}
}
The class tSubject is decorated with the attribute TestFixture, and the method
tAdd is decorated with the attribute Test. You can compile this and run it in
the NUnit GUI application. It will produce a successful test run.
tSubject
tAdd
That is the basics of what NUnit offers. There are attributes to help with
setting up and tearing down your test environment: SetUp, SetUpFixture,
TearDown, and TearDownFixture. SetUpFixture is run once at the beginning when the
fixture is first created; similarly, TearDownFixture is run once after all tests
have completed. SetUp and TearDown are run before and after each test.
SetUp
SetUpFixture
TearDown
TearDownFixture
NUnit tests can be run several different ways: from the GUI application, from
the console's application, and from a NAnt task. NUnit has been integrated into
Cruise Control .NET as well. In the last product review, you will see how it has
been integrated into the VS.NET IDE as well.
Figure 1. NUnit GUI Application
Related Reading
Visual Studio Hacks
Tips & Tools for Turbocharging the IDE
By James A.
|
http://archive.oreilly.com/pub/a/dotnet/2005/07/18/unittesting_2005.html
|
CC-MAIN-2016-22
|
refinedweb
| 374
| 65.22
|
I am facing problem in using cin.fail() function!! The problem is that when I want to check that is user enter correct input or not I am using a do-while loop that force the user to enter a valid number but it didn't works!!! the code is here:
#include <iostream> #include <conio.h> using namespace std; int main() { int no; do { cout << "Enter a number: "; cin >> no; if (cin.fail()) { cout << "Please Enter a valid Integer!!"; } } while(!cin.fail()); cin >> no; return 0; }
But when I enter an integer it again shows enter a number[b/] but when I enter a character it works fine and terminate showing the message Please Enter a valid Integer but I want that if user enter a character then it force the user to enter a no otherwise not.
Also when I remove [b]! (negation) from while then this code doesn't work properly i.e., when I am enter a no it do nothing but when I will enter a character an infinite loop is execute.
how to get rid from this problem!!!!!
|
http://www.dreamincode.net/forums/topic/284352-cinfail-problem/
|
CC-MAIN-2016-40
|
refinedweb
| 183
| 70.84
|
The previous chapters detailed the classes and methods available to the developer at the so-called ORM level. However they say little about the common patterns of usage of these objects.
Entities objects (and their adapters) are used in the repository and web sides of CubicWeb. On the repository side of things, one should manipulate them in Hooks and Operations.
Hooks and Operations provide support for the implementation of rules such as computed attributes, coherency invariants, etc (they play the same role as database triggers, but in a way that is independent of the actual data sources).
So a lot of an application’s business rules will be written in Hooks (or Operations).
On the web side, views also typically operate using entity objects. Obvious entity methods for use in views are the Dublin Core methods like dc_title. For separation of concerns reasons, one should ensure no ui logic pervades the entities level, and also no business logic should creep into the views.
In the duration of a transaction, entities objects can be instantiated many times, in views and hooks, even for the same database entity. For instance, in a classic CubicWeb deployment setup, the repository and the web front-end are separated process communicating over the wire. There is no way state can be shared between these processes (there is a specific API for that). Hence, it is not possible to use entity objects as messengers between these components of an application. It means that an attribute set as in obj.x = 42, whether or not x is actually an entity schema attribute, has a short life span, limited to the hook, operation or view within which the object was built.
Setting an attribute or relation value can be done in the context of a Hook/Operation, using the obj.cw_set(x=42) notation or a plain RQL SET expression.
In views, it would be preferable to encapsulate the necessary logic in a method of an adapter for the concerned entity class(es). But of course, this advice is also reasonable for Hooks/Operations, though the separation of concerns here is less stringent than in the case of views.
This leads to the practical role of objects adapters: it’s where an important part of the application logic lies (the other part being located in the Hook/Operations).
We can look now at a real life example coming from the tracker cube. Let us begin to study the entities/project.py content.
from cubicweb.entities.adapters import ITreeAdapter class ProjectAdapter(ITreeAdapter): __select__ = is_instance('Project') tree_relation = 'subproject_of' class Project(AnyEntity): __regid__ = 'Project' fetch_attrs, cw_fetch_order = fetch_config(('name', 'description', 'description_format', 'summary')) TICKET_DEFAULT_STATE_RESTR = 'S name IN ("created","identified","released","scheduled")' def dc_title(self): return self.name
The fact that the Project entity type implements an ITree interface is materialized by the ProjectAdapter class (inheriting the pre-defined ITreeAdapter whose __regid__ is of course ITree), which will be selected on Project entity types because of its selector. On this adapter, we redefine the tree_relation attribute of the ITreeAdapter class.
This is typically used in views concerned with the representation of tree-like structures (CubicWeb provides several such views).
It is important that the views themselves try not to implement this logic, not only because such views would be hardly applyable to other tree-like relations, but also because it is perfectly fine and useful to use such an interface in Hooks.
In fact, Tree nature is a property of the data model that cannot be fully and portably expressed at the level of database entities (think about the transitive closure of the child relation). This is a further argument to implement it at entity class level.
fetch_attrs configures which attributes should be pre-fetched when using ORM methods retrieving entity of this type. In a same manner, the cw_fetch_order is a class method allowing to control sort order. More on this in Loaded attributes and default sorting management.
We can observe the big TICKET_DEFAULT_STATE_RESTR is a pure application domain piece of data. There is, of course, no limitation to the amount of class attributes of this kind.
The dc_title method provides a (unicode string) value likely to be consumed by views, but note that here we do not care about output encodings. We care about providing data in the most universal format possible, because the data could be used by a web view (which would be responsible of ensuring XHTML compliance), or a console or file oriented output (which would have the necessary context about the needed byte stream encoding).
Note
The Dublin Core dc_xxx methods are not moved to an adapter as they are extremely prevalent in CubicWeb and assorted cubes and should be available for all entity types.
Let us now dig into more substantial pieces of code, continuing the Project class.
def latest_version(self, states=('published',), reverse=None): """returns the latest version(s) for the project in one of the given states. when no states specified, returns the latest published version. """ order = 'DESC' if reverse is not None: warn('reverse argument is deprecated', DeprecationWarning, stacklevel=1) if reverse: order = 'ASC' rset = self.versions_in_state(states, order, True) if rset: return rset.get_entity(0, 0) return None def versions_in_state(self, states, order='ASC', limit=False): """returns version(s) for the project in one of the given states, sorted by version number. If limit is true, limit result to one version. If reverse, versions are returned from the smallest to the greatest. """ if limit: order += ' LIMIT 1' rql = 'Any V,N ORDERBY version_sort_value(N) %s ' \ 'WHERE V num N, V in_state S, S name IN (%s), ' \ 'V version_of P, P eid %%(p)s' % (order, ','.join(repr(s) for s in states)) return self._cw.execute(rql, {'p': self.eid})
These few lines exhibit the important properties we want to outline:
|
https://docs.cubicweb.org/book/devrepo/entityclasses/application-logic.html
|
CC-MAIN-2017-13
|
refinedweb
| 966
| 52.49
|
Stefan Tilkov has several interesting remarks regarding our .NET Service Bus REST Queue Protocol that are worth addressing.
Putting a password in the URI to get an identity token seems to expose information unnecessarily
That’s an area where we know that we’re going to change the protocol. We’ve already labeled that protocol as temporary in the documentation for the PDC CTP and we didn’t get all the pieces in the .NET Access Control service together, yet. Since it’s a HTTPS call, the data doesn’t get exposed on the wire, though.
Queue creation seems fine, even though I feel a little uneasy about wrapping this in an Atom entry
Using Atom 1.0 and the Atom Publishing Protocol as the framework for managing the namespace is very intentional - for several reasons. First of all, it’s a standardized protocol for managing generic lists and the elements in those lists. With that we have a stable and accepted protocol framework and there’s plenty of tooling and framework support around it. That’s worth something. All we need to do is to add some simple extensions – the policies – on top of that stack. Beats having to define, version, and maintain a whole protocol.
On the other hand, Atom seems reasonable considering you get an Atom feed from the queue’s “parent” resource
That’s what I mean. All sorts of tools know how to navigate and display Atom.
Very nice to see the use of link/rel to get to the detailed Queue URIs; it would be even better if the rel values themselves were URIs in an MS namespace
I don’t see much potential for collision here and I would find it odd to have something as simple “self” and “alternate” and then add some unsightly QName for my rel expressions. Simple is good.
Using “alternate” for the tail seems strange
“self” refers to the location where the Atom <entry> resides. “alternate” is what the entry points to. Since the Queue gets mapped into the namespace by “sticking its tail out”, the choice of the alternate link is the simplest possible mapping I could think of.
.
The way to look at this is that the Queue’s tail is acting on behalf of the receiver/resource that’ll eventually pick up and process the messages. POST, PUT, DELETE, and BOOYAH are all operations that cleanly map to processing operations and can often be delivered asynchronously with a 202 receipt reply. GET and HEAD don’t make much sense when executed in an asynchronous fashion without getting a reply that’s backed by a response for the receiver. OPTIONS is simply reserved for future use.
.
DELETE is indeed the dequeue operation variant that you’d use if you are ok with occasional message loss and want to trade transfer reliability for fewer roundtrips.
The POST lock/delete approach, on the other hand, is very nice. Maybe it should be made idempotent, again e.g. using POE
POST lock/delete is the dequeue variant where you are doing the trade the other way around. More reliability bought with an extra roundtrip. In my view, idempotent access to individual messages isn’t much of a practical priority for a competing consumer queue. You’ll get a message or a set of messages to look at under a lock and if you walk away from the message(s), those messages pop back into the queue for someone else to look at. There are obviously scenarios where you want to look at a message sequence in other ways than an ordered queue where you can only get at messages as they appear on the head of the sequence – direct message access and per-message idempotent access matter in those scenarios and we’re looking to give you a capability of that sort in a different kind of messaging primitive.
“The Delete request is sent without any entity body and MUST have a Content-Length header that is set to zero (0)”; although my immediate reaction was to question whether DELETE ever carries a body, the HTTP spec indeed doesn’t say anything about this
One of the great things about working here is that there are all sorts of interesting people around. I’ve discussed the use of DELETE and whether you can provide an entity body in either direction with Henrik Frystyk Nielsen, who works as an architect on the WCF team and is one of the co-authors of HTTP 1.1. Henrik’s stance is that all operations allow entity-bodies unless it’s explicitly forbidden in the spec. I don’t have a better authority to talk to.
“The DELETE and POST operation have a set of options that are expressed as query parameters appended to the queue’s head URI” - the wording is worse than the actual approach.
I’m sorry that my writing is so clumsy ;-)
|
http://blogs.msdn.com/b/clemensv/archive/2009/04/06/the-net-service-bus-rest-protocol-for-queues-ndash-some-comments-some-answers.aspx
|
CC-MAIN-2014-15
|
refinedweb
| 823
| 59.03
|
Hi, I am trying to write a simple script that reads a file for two time stamps and then finds the delta. However, I can't get passed converting the time strings to datetime objects. I haven't done any coding since college and I everything I've read says this code should work. I am getting nowhere.
I am sure there are much more efficient ways to do this but right now I just want it to work.
I removed some of the code not related to the problem to make it less confusing.
from datetime import datetime # Open file f = open('Carolina_PMs.txt', 'r') # Read and ignore header lines header1 = f.readline() # Loop over file to read lines and extract times/labor hours for line in f: line = line.strip() # Split text in file into columns columns = line.split() # Split certain columns from above into more columns based on " symbol to get labor i.e. 0.5 hours splitStart = columns[8].split('"') splitEnd = columns[10].split('"') # Extract start and end times as strings, then add AM/PM extractStartTime = columns[7] extractEndTime = columns[9] startTime = extractStartTime + " " + splitStart[0] endTime = extractEndTime + " " + splitEnd[0] # Convert strings to datetime objects start = datetime.strptime(startTime, '%I:%M%p') end = datetime.strptime(endTime, '%I:%M%p') f.close()
The strings stored in startTime and endTime look like this:
print(startTime, endTime)
01:11 PM 01:41 PM
And here is the error I get:
Traceback (most recent call last): File "C:/Users/intermed/PycharmProjects/Clarify/Clarify.py", line 24, in <module> start = datetime.strptime(startTime, '%I:%M%p') File "C:\Users\intermed\AppData\Local\Programs\Python\Python36\lib\_strptime.py", line 565, in _strptime_datetime tt, fraction = _strptime(data_string, format) File "C:\Users\intermed\AppData\Local\Programs\Python\Python36\lib\_strptime.py", line 362, in _strptime (data_string, format)) ValueError: time data '01:11 PM' does not match format '%I:%M%p'
|
https://www.daniweb.com/programming/software-development/threads/507723/need-help-with-simple-script
|
CC-MAIN-2020-05
|
refinedweb
| 316
| 59.4
|
C library function - ungetc()
Description
The C library function int ungetc(int char, FILE *stream) pushes the character char (an unsigned char) onto the specified stream so that the this is available for the next read operation.
Declaration
Following is the declaration for ungetc() function.
int ungetc(int char, FILE *stream)
Parameters
char -- This is the character to be put back. This is passed as its int promotion.
stream -- This is the pointer to a FILE object that identifies an input stream.
Return Value
If successful, it returns the character that was pushed back otherwise, EOF is returned and the stream remains unchanged.
Example
The following example shows the usage of ungetc() function.
#include <stdio.h> int main () { FILE *fp; int c; char buffer [256]; fp = fopen("file.txt", "r"); if( fp == NULL ) { perror("Error in opening file"); return(-1); } while(!feof(fp)) { c = getc (fp); /* replace ! with + */ if( c == '!' ) { ungetc ('+', fp); } else { ungetc(c, fp); } fgets(buffer, 255, fp); fputs(buffer, stdout); } return(0); }
Let us assume, we have a text file file.txt, which contains the following data. This file will be used as an input for our example program:
this is tutorials point !c standard library !library functions and macros
Now, let us compile and run the above program that will produce the following result:
this is tutorials point +c standard library +library functions and macros
|
http://www.tutorialspoint.com/c_standard_library/c_function_ungetc.htm
|
CC-MAIN-2015-27
|
refinedweb
| 229
| 66.33
|
Michael Third
- Total activity 13
- Last activity
- Member since
- Following 0 users
- Followed by 0 users
- Votes 0
- Subscriptions 5
Michael Third created a post,
Problems with 209I get the following error after loading a project in build 209. I had this same problem with 208, but in 209 it goes into a loop and doesn't stop reporting errors. I think its to due to the fact ...
Michael Third created a post,
Build 209 unusableI'm unable to use build 209 because of non-stop exceptions right after parsing the referenced assemblies. It appears to be failing while loading the VBASIC language service (which I don't have ins...
Michael Third created a post,
Feature request to help out with DataBindingsI create a custom model object for each control or form and bind that to the UI control instead of using DataSets. One thing that would be a huge timesaver would be to add an option to the Generat...
Michael Third commented, Michael Third created a post,
System namespace not recognizedI've noticed several people in the past had this same problem, but no resolution was ever posted. This just started happening for me with build 78 and no uninstall/reinstall seems to fix it.Thanks...
|
https://resharper-support.jetbrains.com/hc/en-us/profiles/2135228385-Michael-Third
|
CC-MAIN-2021-31
|
refinedweb
| 207
| 57.4
|
I am trying to build a class with definitions that will output the following...
Income tax for year 2009:
Name: John Doe
Address: 1234 Alphabet Lane, City, State Zip
SSN: 111-11-1111
DOB: 11-11-1111
I have built my class and source code to define the function members. But I am running into an output problem. When I enter in the first and last name for the Name prompt in my program it skips over the Address. I am sure this is an obvious limitation in my code but because I am a noob I am recognize it. Also I am a little unsure as to how I would fix this for the Address since the address also is going to have multiple spacing contained within the string. I have posted my Personal Class, Source Code, and Main Code below. Thanks in advance.
//Personal Class Definition #include <iostream> #include <string> using namespace std; //Personal class definition class Personal { public: Personal(string, string, int, int); //constructor that initiates the Income string getName(); //void setAddress(string); string getAddress(); //void setBirthday(int); int getBirthday() const; //void setSSN(int); int getSSN() const; private: string Name; string Address; int DOB; int SSN; }; //end class Income
I took out the void functions but I also left them because I was not sure why it would be required to use them. But they are supposed to be used (if that makes sense).
//Personal Member Function Definitions #include <iostream> #include <string> #include "Personal.h" using namespace std; //constructor initializes Name, Address, DOB, and SSN Personal::Personal(string n, string a, int d, int s) { Name = n; Address = a; DOB = d; SSN = s; } //end Personal constructor //function to return the name of the person string Personal::getName() { return Name; }//end function getName //function to take in the address for the person //void Personal::setAddress(string a) //{ // a = Address; //}//end function setAddress //function to return the address of the person string Personal::getAddress() { return Address; }//end function getAddress //function to take in the DOB of the person //void Personal::setBirthday(int d) //{ // d = DOB; //}//end function setBirthday //function to return the DOB of the person int Personal::getBirthday() const { return DOB; }//end function getBirthday //function to take in the SSN of the person //void Personal::getSSN(int s) //{ // s = SSN; //}//end function getSSN //function to return the SSN of the person int Personal::getSSN() const { return SSN; }//end function getSSN
#include <iostream> #include <string> #include "Personal.h" //include definition of class Dividend //#include "Donation.h" //include definition of class Donation //#include "Income.h" //include definition of class Income using namespace std; //function main begins program execution int main () { string n, a; int d, s; cout << "\nPlease enter your name: "; cin >> n; cout << "Please enter your address: "; cin >> a; cout << "Please enter your date of birth: "; cin >> d; cout << "Please enter your social security number: "; cin >> s; Personal person(n, a, d, s); cout << "Income tax for year 2009: "; cout << "\nName: " << person.getName() << endl; cout << "Address: " << person.getAddress() << endl; cout << "DOB: " << person.getBirthday() << endl; cout << "SSN: " << person.getSSN() << endl; }//end main
|
https://www.daniweb.com/programming/software-development/threads/263967/personal-class-header-and-function-member-definitions
|
CC-MAIN-2018-43
|
refinedweb
| 512
| 54.15
|
Lennart Augustsson wrote: > Daniel Fischer wrote: > >> And could one define >> >> \f g h x y -> f (g x) (h y) >> >> point-free? > > Any definition can be made point free if you have a > complete combinator base at your disposal, e.g., S and K. > > Haskell has K (called const), but lacks S. S could be > defined as > spread f g x = f x (g x) Given (you guessed it) class Idiom i where ii :: x -> i x (<%>) :: i (s -> t) -> i s -> i t I tend to write instance Idiom ((->) r) where ii = const (<%>) rst rs r = rst r (rs r) or instance Idiom ((->) r) where ii = return (<%>) = ap The idiom bracket notation (implemented by ghastly hack) gives iI f is1 ... isn Ii = ii f <%> is1 <%> .. <%> isn :: i t when f :: s1 -> .. -> sn -> t is1 :: i s1 .. isn :: i sn The point is to turn higher-order/effectful things into first-order applicative things, so eval :: Expr -> [Int] -> Int eval (Var j) = (!! j) eval (Add e1 e2) = iI (+) (eval e1) (eval e2) Ii -- and so on The above is a bit pointwise, a bit point-free: the components of the expression get named explicitly, the plumbing of the environment is hidden. I get the plumbing for free from the structure of the computations, which I really think of as first-order things in the environment idiom, rather than higher-order things in the identity idiom. Thomas Jäger wrote: > Yes, me too. I think obscure point-free style should only be used if a > type signature makes it obvious what is going on. Occasionally, the > obscure style is useful, though, if it is clear there is exactly one > function with a specific type, but tiresome to work out the details > using lambda expressions. For example to define a map function for the > continuation monad > >>cmap :: (a -> b) -> Cont r a -> Cont r b Correspondingly, if I were developing the continuation monad, I'd probably write the monad instance itself in quite a pointy way, with suggestive (not to say frivolous) identifiers data Cont a x = Cont {runCont :: (x -> a) -> a} instance Monad (Cont a) where return x = Cont $ \ uputX -> uputX x ugetS >>= ugetTfromS = Cont $ \ uputT -> runCont ugetS $ \ s -> runCont (ugetTfromS s) $ \ t -> uputT t And then I already have the map operator, liftM. But more generally, if I wanted to avoid ghastly plumbing or overly imperative-looking code, I'd perform my usual sidestep instance Idiom (Cont a) where ii = return (<%>) = ap and now I've got a handy first-order notation. If I didn't already have map, I could write mapI :: Idiom i => (s -> t) -> i s -> i t mapI f is = iI f is Ii although mapI = (<%>) . ii is perhaps too tempting for an old sinner like me. My rule of thumb is that tunes should be pointwise, rhythms point-free. And you know the old gag about drummers and drum machines... Conor --
|
http://www.haskell.org/pipermail/haskell-cafe/2005-February/009119.html
|
CC-MAIN-2014-35
|
refinedweb
| 485
| 64.54
|
The Birthday Paradox is presented as follows.
…in a random group of 23 people, there is about a 50 percent chance that two people have the same birthdayBirthday Paradox
This is also referred to as the Birthday Problem in probability theory.
First question: What is a paradox?
…is a logically self-contradictory statement or a statement that runs contrary to one’s expectationWikipedia
What does that mean? A logically self-contradictory statement‚ means that there should be a contradiction somewhere in the Birthday Paradox. This is not the case.
Then a statement that runs contrary to one’s expectations, could be open for discussion. As we will see, by example, in this post, it is not contrary to one’s expectation for an informed person.
Step 1: Run some examples
The assumption is that we have 23 random people. This assumes further, that the birthday of each one of these people is random.
To validate that this is true, let’s try to implement it in Python.
import random stat = {'Collision': 0, 'No-collision': 0} for _ in range(10000): days = [] for _ in range(23): day = random.randint(0, 365) days.append(day) if len(days) == len(set(days)): stat['No-collision'] += 1 else: stat['Collision'] += 1 print("Probability for at least 2 with same birthday in a group of 23") print("P(A) =", stat['Collision']/(stat['Collision'] + stat['No-collision']))
This will output different results from run to run, but something around 0.507.
Probability for at least 2 with same birthday in a group of 23 P(A) = 0.5026
A few comments to the code. It keeps record of how many times of choosing 23 random birthdays, we will end with at least two of them being the same day. We run the experiment 10,000 times to have some idea if it is just pure luck.
The check if len(days) == len(set(days)) tests whether we did not have the same brirthday. If function set(…) takes all the unique days in the list. Hence, if we have two the same days days of the year, then the len (length) will be the same for the list and the set of days.
Step 2: The probability theory behind it
This is where it becomes a bit more technical. The above shows it behaves like it says. That if we take a group of 23 random people, with probability 50%, two of them will have the same birthday.
Is this contrary to one’s expectations? Hence, is it a paradox?
Before we answer that, let’s see if we can nail the probability theory behind this.
Do it step by step.
If we have 1 person, what is the probability that anyone in this group of 1 person has the same birthday? Yes, it sounds strange. The probability is obviously 0.
If we have 2 persons, what is the probability that any of the 2 people have the same birthday? Then they need to have the same birthday. Hence, the probability become 1/365.
How do you write that as an equation?
What we often do in probability theory, is, that we calculate the opposite probability.
Hence, we calculate the probability of now having two the same birthdays in a group. This is easier to calculate. In the first case, we have all possibilities open.
P(1) = 1
Where P(1) is the probability that given a group of one person, what is the probability of that person not having the same birthday as anyone in the group.
P(2) = 1 x (364 / 365)
Because, the first birthday is open for any birthday, then the second, only has 364 left of 365 possible birthdays.
This continues.
P(n) = 1 x (364 / 365) x (363 / 365) x … x ((365 – n + 1) / 365)
Which makes the probability of picking 23 random people without anyone with the same birthday to be.
P(23) = 1 x (364 / 365) x (363 / 365) x … x (343 / 365) = 0.493
Or calculated in Python.
def prop(n): if n == 1: return 1 else: return (365 - n + 1) / 365 * prop(n - 1) print("Probability for at no-one with same birthday in a group of 23") print("P(A') =", prop(23))
Which results in.
Probability for at no-one with same birthday in a group of 23 P(A') = 0.4927027656760144
This formula can be rewritten (see wikipedia), but for our purpose the above is fine for our purpose.
The probability we look for is given by.
P(A) = 1 – P(A’)
Step 3: Make a graph of how likely a collision is based on a group size
This is great news. We can now calculate the theoretical probability of two people having the same birthday in a group of n random people.
This can be achieved by the following code.
from matplotlib import pyplot as plt def prop(n): if n == 1: return 1 else: return (365 - n + 1) / 365 * prop(n - 1) X = [] Y = [] for i in range(1, 90): X.append(i) Y.append(1 - prop(i)) plt.scatter(X, Y, color='blue') plt.xlabel("Number of people") plt.ylabel("Probability of collision") plt.axis([0, 90, 0, 1]) plt.show()
Which results in the following plot.
Where you can see that about 23 people, we have 50% chance of having a pair with the same birthday (called collision).
Conclusion
Is it a paradox? Well, there is no magic in it. You can see the above are just simple calculations. But is the following contrary to one’s expectation?
6 weeks are 6*7*24*60*60 seconds = 3,628,800 seconds.
And 10! = 10*9*8*7*6*5*4*3*2*1 = 3,628,800.
Well, the first time you calculate it might be. But does that make it a paradox?
No, it is just a surprising fact the first time you see it. Does it mean that seconds are related to faculty? No, of course not. It is just one strange thing that connects in a random way.
The same with the Birthday Paradox, it is just surprising the first time you see it.
It seems surprising for people that you only need 23 people to have 50% chance of a pair with the same birthday, but it is not a paradox for people that work with numbers.
|
https://www.learnpythonwithrune.org/birthday-paradox-by-example-it-is-not-a-paradox/
|
CC-MAIN-2021-25
|
refinedweb
| 1,060
| 74.69
|
>> comments to cells in your spreadsheet.
Step 1:
Add the necessary namespaces to your GcExcel project.
Step 2:
Initialize the workbook. Add a worksheet and a range of data to it.
Step 3:
In this example, we’ll add a comment to the cell that includes the lowest weight. In this case, it’s E6, where the weight is 58 kilograms.
Step 4:
Create a comment at cell E6.
Step 5:
To keep the comment opened when a user opens the Excel file, set the visible property of the comment to true.
Step 6:
Finally, save the workbook to see the comment. It’s visible and added to the E6 cell.!
|
https://www.grapecity.com/blogs/how-to-add-comments-to-your-dot-net-spreadsheet-gcexcel
|
CC-MAIN-2019-22
|
refinedweb
| 111
| 85.89
|
Details
Description
String s = "5"
int x = s
The effect of this code is that x is assigned the value of char '5'.
String s = "10"
int x = s
This produces a runtime exception.
If auto-conversion is absolutely needed between 1-character Strings and ints, then one would hope it would at least be limited to literal Strings. This behavior makes no sense.
Activity
Is there a downside to your second solution ("forbid the handling of Strings of length 1 as chars without explicit conversion")?
You could resolve literals at parse time, whenever possible. So:
int x = "5"
would be the same as int x = '5' in Java, but
int x = foo
would fail at runtime whenever foo is a String of any length.
there are always downsides... for example old code will behave different. More weight does have that it will not be possible to do something like this:
def foo(char c){}
foo("a")
This must fail then, because "a" is a String and no char.
adding special handling for
int x="5"
is no good idea. People normally don't care much about String constants and String references when coming from Java. And it will be hard to explain them, that
int x="5"
behaves different from
def foo(){"5"}
int x = foo()
I don't have any good ideas, so I guess I'll close this issue. I'm assuming that's OK - if it's the assignee's job to close it I apologize.
this is no bug, this is the intended behavior. That is because groovy doesn't know characters, but handles characters as Strings of length 1. The ways I see to solve the problem above is to introduce chars or to forbid the handling of Strings of length 1 as chars when not explicitly converted to chars before.
String s = "5"
int x = s
would then throw an runtime exception.
String s = "5"
char c = s
int x = c
would not.
String s = "10"
int x = s
and String s = "10"
char c = s
would both cause an exception during runtime.
BTW, after the String is created, there is no way to know it was a liteal String.
|
http://jira.codehaus.org/browse/GROOVY-1299
|
CC-MAIN-2015-11
|
refinedweb
| 367
| 79.8
|
CodePlexProject Hosting for Open Source Software
Hi,
I need to write a number of Python scripts to automate some Microsoft Office tasks using Word, Powerpoint, etc. When using Pythonwin, I can get completion when writing
import win32com.client as win32
word = win32.gencache.EnsureDispatch('Word.Application')
doc = word.Documents.Add()
word.Visible = True
rng = doc.Range(0,0)
rng.InsertAfter('Hacking Word with Python\r\n\r\n')
In other words, I can get code completion by typing word.<TAB>. Strangely, this does not work with Python tools for Visual Studio 2010. Am I missing something, or is this not designed to work this way?
Thanks!
Do you see other win32* modules after you type "import "? Using Enthought's Python distribution which includes win32com I see win32api but not win32com which may be the reason this isn't working for you - that's definitely a bug and I've opened
it here:.
But I wouldn't expect the completion to work against the word object because we won't know the members it has. We could consider adding a feature to directly support intellisense agianst COM objects when using win32 though. Is there an IDE that
you use where that does work? On the other hard it could work in the REPL where we can use live objects for completion.
Thanks for feedback - I've opened a feature request for this (). I don't know that we'll get to this anytime too soon but it's a great idea and I don't want
to lose track of it.
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later.
|
http://pytools.codeplex.com/discussions/252441
|
CC-MAIN-2017-34
|
refinedweb
| 296
| 74.79
|
Making the Reactive Queue Durable with Akka Persistence
Making the Reactive Queue Durable with Akka Persistence
Join the DZone community and get the full member experience.Join For Free
Some time ago I wrote how to implement a reactive message queue with Akka Streams. The queue supports streaming send and receive operations with back-pressure, but has one downside: all messages are stored in-memory, and hence in case of a restart are lost.
But this can be easily solved with the experimental
akka-persistence module, which just got an update in Akka 2.3.4.
Queue actor refresher
To make the queue durable, we only need to change the queue actor; the reactive/streaming parts remain intact. Just as a reminder, the reactive queue consists of:
- a single queue actor, which holds an internal priority queue of messages to be delivered. The queue actor accepts actor-messages to send, receive and delete queue-messages
- a Broker, which creates the queue actor, listens for connections from senders and receivers, and creates the reactive streams when a connection is established
- a Sender, which sends messages to the queue (for testing, one message each second). Multiple senders can be started. Messages are sent only if they can be accepted (back-pressure from the broker)
- a Receiver, which receives messages from queue, as they become available and as they can be processed (back-pressure from the receiver)
Going persistent (remaining reactive)
The changes needed are quite minimal.
First of all, the
QueueActor needs to extend
PersistentActor, and define two methods:
receiveCommand, which defines the “normal” behaviour when actor-messages (commands) arrive
receiveRecover, which is used during recovery only, and where replayed events are sent
But in order to recover, we first need to persist some events! This should of course be done when handling the message queue operations.
For example, when sending a message, a
MessageAdded event is persisted using
persistAsync:
def handleQueueMsg: Receive = { case SendMessage(content) => val msg = sendMessage(content) persistAsync(msg.toMessageAdded) { msgAdded => sender() ! SentMessage(msgAdded.id) tryReply() } // ... }
persistAsync is one way of persisting events using akka-persistence. The other,
persist (which is also the default one), buffers subsequent commands (actor-messages) until the event is persisted; this is a bit slower, but also easier to reason about and remain consistent. However in case of the message queue such behaviour isn’t necessary. The only guarantee that we need is that the message send is acknowledged only after the event is persisted; and that’s why the reply is sent in the after-persist event handler. You can read more about
persistAsync in the docs.
Similarly, events are persisted for other commands (actor-messages, see
QueueActorReceive). Both for deletes and receives we are using
persistAsync, as the queue aims to provide an at-least-once delivery guarantee.
The final component is the recovery handler, which is defined in
QueueActorRecover (and then used in
QueueActor). Recovery is quite simple: the events correspond to adding a new message, updating the “next delivery” timestamp or deleting.
The internal representation uses both a priority queue and a by-id map for efficiency, so when the events are handled during recovert we only build the map, and use the
RecoveryCompleted special event to build the queue as well. The special event is sent by akka-persistence automatically.
And that’s all! If you now run the broker, send some messages, stop the broker, start it again, you’ll see that the messages are recovered, and indeed, they get received if a receiver is run.
The code isn’t production-ready of course. The event log is going to constantly grow, so it would certainly make sense to make use of snapshots, plus delete old events/snapshots to make the storage size small and recovery fast.
Replication
Now that the queue is durable, we can also have a replicated persistent queue almost for free: we simply need to use a different journal plugin! The default one relies on LevelDB and writes data to the local disk. Other implementations are available: for Cassandra, HBase, and Mongo.
Making a simple switch of the persistence backend we can have our messages replicated across a cluster.
Summary
With the help of two experimental Akka modules, reactive streams and persistence, we have been able to implement a durable, reactive queue with a quite minimal amount of code. And that’s just the beginning, as the two technologies are only starting to mature!
If you’d like to modify/fork the code, it is available on Github.
Published at DZone with permission of Adam Warski , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }}
|
https://dzone.com/articles/making-reactive-queue-durable
|
CC-MAIN-2020-29
|
refinedweb
| 789
| 51.58
|
Agenda
See also: IRC log
<trackbot> Date: 16 July 2008
Hello Everyone,
<fjh> Scribe: Konrad Lanz
fjh: Introducing himself - work for Nokia, chairing this group, was chair of previous XML Security Specifications Maintenance WG. Participated in original XML Signature and Encryption working groups and XKMS. Active in OASIS, including the Board and SAML TC.
brich: intro ...
SC: intro ... working for Nokia, on SAML OpenID ...
bal: intro ... XMLSEC, WSS, ...
hal: intro ... WSS, WS-SX, SSTC - Co-Chair, Oasis Technical Advisor ...
tlr: intro ,,, team contact, means I'm your man in W3C ...
klanz2: ... XML Toolkit @ IAIK/SIC
jcc: upc ... standardization
csolc: five years in the area with adobe
gerald: client of XMLDSIG ...
sean: intro ... SUN, XML sec implementions, JSR105 ...
@all: please augment where needed ...
RESOLUTION: Dinner @21:00, all are coming
rdmiller: intro ... MITRE
Supports US Dept. of Defense, daily contact with XML and
XMLSEC, user perspective and best practices pperspective
... update crypto, NSA suite B
magnus: inro ... working for RSA, standardization PKCS
<rmiller> silence
setting up again
<tlr> yes, we got dropped
<tlr> sorry
lost the bridge
fjh: minutes @ every
meeting
... on the irc chat
... notes during the meeting, you are encouraged to augment and correct them
... minutes are public
...
... minutes are in general public, n
... but we might make them private until approved
... part of the job of scribing is cleaning the minues at the end
fjh: its cumbersome to move minutes around from private to public
klanz: member-list
tlr: yes, the member list, ...
RESOLUTION: Scribe will post the minutes once edited to member-list and as soon as approved to the public-list
Subject: [minutes-draft], [minutes-approved] to be used ...
klanz2: we can then use the list searc features to list all the minutes ...
<fjh> scribe instructions
fjh: volunteer for scribing,
....
We will share scribing round robin in the WG, apart from the Chair and Team contact.
Wed morning (16 July am) - Konrad
Wed afternoon (16 July pm) - Hal
Thursday morning (17 July am) - Bruce
Thursday afternoon (17 July pm) - Sean
hal: leaving tomorrow ...
brich: thursday morning
sean: thursday afternoon
fjh: one hour to little, need two hours
<fjh>
RESOLUTION: Tuesdays 10am ET, two hours
fjh: one more F2F, tech planary
colocated
... 20-21. Oct. 2008
... What joint meeting do we need?
... EXI, XML Core,
klanz: namespace inheritance
-> xml core
... enveloping signatures
<klanz22> hal: encapsulation
<scribe> ACTION: fjh to arrange joint meetings on the coordination call [recorded in]
<trackbot> Created ACTION-4 - Arrange joint meetings on the coordination call [on Frederick Hirsch - due 2008-07-23].
fjh: telco starting on time, ...
we start on time ... try to be on time
... charter, do we need the infoset, what to do with C14n, doe we need transforms ...
hal: need to be aware of interdependencies and conflicting goals
fjh: we need to take advantage of
members as resource for editing, actions etc ....
... maintaining issues lists
... workshop results last year, went into requirements ...
that one ?:
hal: ECC SuiteB, (IPR ... ), no
one from NIST or NSA here ?
... Encryption and Signature in hardware?
rdmiller: have contact into both areas, re SuiteB and hardware
<trackbot> ACTION-27 -- Robert Miller to contact crypto hardware and suiteB experts in NSA regarding XML Security WG and possible involvement -- due 2008-08-08 --OPEN
<trackbot>
bal: even if do not get direct
involvement, we hope we can obtain feed back ...
... on request.
hal: heart beat requirement?
tlr: draft every three month for each deliverable
bal: Don Eastlake? IETF?
hal: Encryption not an RFC ...
tlr: minutes, we value
availability over perfection
... vCal availiable for tracker items ... there is a feed
<fjh> can enter action-# to get link to it
<fjh> action-001
<tlr> action-001?
<trackbot> ACTION-1 -- Thomas Roessler to test trackbot-ng -- due 2007-04-12 -- CLOSED
<trackbot>
NOTE: Update the association with the new Workgroup, and associate Products
<tlr> COI policy
<sean> ack
general discussion on IPR
tlr: WG notes are not covered by the IPR policy
brich: did we have any under the maintenance group?
tlr: test cases, best practices ...
hal: distinction between public review and WG issues raised?
fjh: process wise different
... external comments will be discussed ... internal one have to be specific ....
... we need to more formal to get get more review ...
tlr: use working relations and formal contact where suited ...
hal: there is a difference between getting plain feedback vs. formal feed back from other groups that might not even be existence any more ...
<scribe> ACTION: fjh to check how the formal OASIS liasion is working. [recorded in]
<trackbot> Created ACTION-5 - Check how the formal OASIS liasion is working. [on Frederick Hirsch - due 2008-07-23].
hal: the conflict of interest policy is section 3.1.1 W3C process ...
<tlr> needs update, incidentally. That's an action on me. I suspect.
<anil> zamkim, code?
fjh: home page simple, if you
want to enhance please do so its in cvs
... we should get a wiki, wiki didn't work to good in the past
... volunteers for main page?
... tracker, lists issues and actions ...
<jcc> FH; something that we did not used: tool for creating new issues
<anil>
<anil> example ^^^
<jcc> Link:
<jcc> FH: certain basic rules for new issues, including meaningful information categories
<jcc> details in
<jcc> actually in
fjh: issues lists is a good tool to move issues through states
<tlr> ISSUE: tracker doesn't get its e-mails through
<trackbot> Created ISSUE-2 - Tracker doesn't get its e-mails through ; please complete additional details at .
fjh: we need a volunteer to take responsibility of making sure external issues get on the list
Gerald: Volunteered to take care of issue Tracking
fjh: Thanks
<Zakim> anil, you wanted to mention that the spec can be updated at places with issue numbers and dealt with as and when completed
<rmiller> Rob Miller is going offline and will not return until tomorrow morning.
<fjh> Pratik has been working on best practices, interested in streaming
fjh: versioning policy constrains us
work on xml enc is limited to dsig compatability and algs
updates to c14n will be jointly issued by us and xml core in order to retain IPR commitments
members of the wg are encouraged to nominate other groups who we should coordinate with
thomas to act as informal liasion with IETF
hal, jcc & fjh will liaise with OASIS TCs
bruce to informally liaise with WS-Fed
need to add ebxml tcs to list of OASIS TCs
sean to investigate ebxml liasion
<scribe> ACTION: sean to investigate ebxml liasion [recorded in]
<trackbot> Created ACTION-6 - Investigate ebxml liasion [on Sean Mullan - due 2008-07-23].
<scribe> ACTION: bruce to informally liaise with WS-Fed [recorded in]
<trackbot> Created ACTION-7 - Informally liase with WS-Fed [on Bruce Rich - due 2008-07-23].
<anil> I am getting involved in some healthcare security standard groups (no one in particular)
hal & fjh to liaise with WS-I BSP
will use workshop mailing list to communicate with interested parties
bruce & sean to liaise with Java community
klanz: need to tradeoff between maint and major changes
... need requirements discussion first
hal: could do low impact items first, but risk of not driving adoption of later step
sean: can have actions on wg members to provide proposals on different areas
fjh: need to focus on reqs
sean: tag with risk level
fjh: do best practices and maint in parallel
bal: whan we gather reqs will see
a break btw simple and hard
... then we can decide tactics
... worry about task force idea
... relatively small group
fjh: make easy decisions up front
bal: will be pressure to produce
short term spec
... will be easier to get impls
tlr: have ability to split or join specs
fjh: want to defer this for now
fjh: principles and
requirements
... valuable exercise to go through ...
... walking through slide with original requirements ...
... design for security and mitigate attacks ...
... some workshop feed-back shows that there was a *lot* of balancing going on ...
... maybe solve through profiling ...
... revisit extensibility requirements ...
... interoperability and compatibility are important, and new since we're talking about Vnext ...
... should recognize layered architecture of implementations ...
... I probably missed some principles ...
<tlr>
RESOLUTION: have a list of principles as basis for work
bal: needed both principles and usecases
klanz: may find things which are
incompatible with principles
... principles SHOULD be followed
bal: principles may be in conflict
hal: propose 4 categories: security, performance, new features, operational errors
fjh: how should we process workshop papers?
bal: create reading groups
<bal> and schedule a few
workshop papers/presentations for discussion each week during
the conf call
... review batch for each call to generate issues and suggestions
klanz: possibility of requesting profile of xslt?
<tlr> XSL is being chaired by Sharon Adler, IBM
<tlr>
klanz: noted that might need xslt transform to be able to sign including the whitespace generated by transform
bal: xsl came in as a part of web
arch
... need to take a look at actual use
... maybe need to drop things which cause security problems
... may not need to carry forward all requirements from orginal dsig
klanz: most of our customers use XSLT
<EdS> XSLT can also be used as a means to collect and meld data from a variety of sources before hashing.
<fjh> review original requirements of dsig
bal: RDF was a requirement at W3C at that time
<pdatta> can you share the URL for this original requirements document
<fjh>
bal: 3.2-4 was a reaction to CMS
limitations
... 3.2 supports compound documents
<tlr> look at pkcs1 in 6.4.2
<tlr> it includes an identifier for the hash algorithm
<tlr> (rsa-sha1 algorithm)
general uncertainty about purpose of 3.3 point 3; likely interpretation: data in XML Signature takes precedence over data in crypto blob
hal: notes support for derived keys in various ws* specs, should consider those requirements and attempt to unify
hal: use cases?
magnus: not really there, indeed
brich: derived keys that WS-SecureConversation makes use of
... can proposal be extended to cover use cases there?
... are that will have to be done sooner or later
magnus: do not see why not; maybe take this conversation offline
hal: specs using derived keys are wss username
token, ws-trust, ws-securitypolicy
... and ws-secureconversation
brich: bulk in secure conversation
not latest:
fjh: editor per spec vs. editor
team
... should use XMLSPEC
... need to set up properly to use ant
... compatable with any XSLT stream
... already have editors for best practices
<tlr> ACTION: thomas to read this action's number [recorded in]
<trackbot> Created ACTION-8 - Read this action's number [on Thomas Roessler - due 2008-07-23].
<scribe> ACTION: gerald to test Issues entry and list generation [recorded in]
<trackbot> Sorry, couldn't find user - gerald
<scribe> ACTION: tlr to fix Tracker [recorded in]
<trackbot> Created ACTION-9 - Fix Tracker [on Thomas Roessler - due 2008-07-23].
RESOLUTION: No call
on July 22nd or 5 August.
... No call on Aug 5
<tlr> for context:
<klanz2>
<klanz2> reviewing 8.1.1 - 8.1.3 : A quote from 8.1.3: Some applications might operate over the original or intermediary data but should be extremely careful about potential weaknesses introduced between the original and transformed data.
RESOLUTION: Accept Best Practices as a Work Item, based on previous work
bal: need to consider best practices for new specs
<bal> and whether some of these turn into a processing model for applications verifying sigs
RESOLUTION: Pratik to continue editing best practices document
konrad: does best practice require implementation experience?
hal: should be sure it works
<scribe> ACTION: fjh to update wg page to include issues link [recorded in]
<trackbot> Created ACTION-10 - Update wg page to include issues link [on Frederick Hirsch - due 2008-07-23].
bruce: put non-normative info in back of spec, could have best practices there as well
tlr: process, once approved add to errata document, but non-normative until new edition published
... decide on update of REC when appropriate, enough docs
... not update REC or red-line at this time
<fjh> WG should review the errata and we will decide whether to approve on next call
<fjh> document section link
<fjh> issue link
<klanz2>
<klanz2>
|
http://www.w3.org/2008/07/16-xmlsec-minutes.html
|
CC-MAIN-2014-52
|
refinedweb
| 2,020
| 63.39
|
import excel file data into Mijoshop Open Cart
This project received 31 bids from talented freelancers with an average bid price of $155 USD.Get free quotes for a project like this
Skills Required
Project Budget$30 - $250 USD
Total Bids31
Project Description
Hi,
I need someone to import these products from my supplier into my Mijoshop Open Cart site. Note the products in the spreadsheet are not in proper format so it may need to be modified. Not sure if there is a way to automate or add all the options needed.
If your price is based on per item then i may not do all the items so please specify in your message and bid. It is1800 items
|
https://www.freelancer.com/projects/Data-Entry-Excel/import-excel-file-data-into/
|
CC-MAIN-2016-22
|
refinedweb
| 120
| 76.86
|
Java Examples - Display week number of the year
Advertisements
Problem Description:
How to find which week of the year, month?
Solution:
The following example displays week no of the year & month.
import java.util.*; public class Main { public static void main(String[] args) throws Exception { Date d1 = new Date(); Calendar cl = Calendar. getInstance(); cl.setTime(d1); System.out.println("today is " + cl.WEEK_OF_YEAR+ " week of the year"); System.out.println("today is a "+cl.DAY_OF_MONTH + "month of the year"); System.out.println("today is a "+cl.WEEK_OF_MONTH +"week of the month"); } }
Result:
The above code sample will produce the following result.
today is 30 week of the year today is a 5month of the year today is a 4week of the month
java_date_time.htm
|
http://www.tutorialspoint.com/cgi-bin/printversion.cgi?tutorial=javaexamples&file=date_weekday.htm
|
CC-MAIN-2015-18
|
refinedweb
| 124
| 60.61
|
Debugging Hedge
Contents
As you might want to look over the shoulder of hedge while it is working in order to understand better what it does you can choose between two different debuggers:
PuDB
WinPdb
pudb
PuDB is a terminal window based python debugger. You don't need to click windows any more.
Install
Download the latest version of PuDB and unzip it by
tar xvfz pudb-0.90.4.tar.gz
and install it by
sudo python setup.py install
or
sudo easy_install pudb
Getting Started
To start debugging, simply insert:
python -m pudb.run yourfile.py
into the piece of code you want to debug, or run the entire script with:
from pudb import set_trace; set_trace()
There's also a screencast to get you started.
WinPdb
The easiest way is to install the newest version from the source package on the Winpdb homepage on your local computer. Actually the aptitude packages you can get throughout the linux package repository might not be the newest version which will cause trouble when working on the hpcgeek cluster.
Getting Started
To receive debug informations from a certain point of your code you have to add the following line to your example
import rpdb2; rpdb2.start_embedded_debugger_interactive_password()
and start the computation as usual. When being asked for the password type an easy-to-remember password and start Winpdb. Attach the debugger (File => Attach). Retype the password and ignore the warning. By using F7 you can step forward in the code.
Debugging Hedge running on hpcgeek
To debug hedge running on the hpcgeek cluster you have to use a ssh tunnel. Therefore you can either modify your .ssh/config in the following way. Example:
Host hpcgeek* User example_user ProxyCommand ssh ccv nc hpcgeek 22 LocalForward 51000 localhost:51000
The other option is to build the ssh tunnel whenever you need it. Therefore type the following commands in your bash shell after logging in on hpcgeek:
cat ~C -L51000:localhost:51000 [ctrl]-[c]
|
http://wiki.tiker.net/Hedge/HowTo/HedgeDebug?action=diff
|
CC-MAIN-2014-15
|
refinedweb
| 328
| 63.39
|
Comment on Tutorial - Creating Functions in VB.net By Steven Holzner
Comment Added by : Billyweimb
Comment Added at : 2017-07-17 14:22:07
Comment on Tutorial : Creating Functions one tell me whether to add any external ja
View Tutorial By: Abhi at 2010-02-23 05:25:29
2. Its really good.....
View Tutorial By: Shivam at 2009-04-02 03:58:51
3. Hi, you put a semicolon after "int main()&quo
View Tutorial By: noel4037 at 2009-12-28 06:40:17
4. read and write methods given here are for a file w
View Tutorial By: Tavva Yaswanth at 2012-10-28 08:02:10
5. import java.util.Scanner;
public class Swit
View Tutorial By: srinath at 2013-04-28 10:17:24
6. when i copy paste your program or source code
View Tutorial By: Zordan at 2011-03-17 04:28:08
7. your code helps a lot.....
thanx......
View Tutorial By: beh at 2009-08-19 06:06:20
8. nice...cleared this example....
View Tutorial By: Waqas Mughal at 2010-03-13 21:21:46
9. Very good example with explaination and sample cod
View Tutorial By: plenitude at 2010-04-15 23:55:02
10. the above code works fine for. May someone help wi
View Tutorial By: T.T at 2012-02-23 06:22:36
|
https://www.java-samples.com/showcomment.php?commentid=41557
|
CC-MAIN-2021-21
|
refinedweb
| 227
| 68.67
|
I have a OO framework of several modules, and dozens of programs that make use of these modules. In each module I always start with the same lines.
use strict;
use Data::Dumper;
use Carp;
[download]
If it is use'd once (f.e. in my constructor class, which is called by every program in its ISA structure), I should be able to call
Dumper by referring to Data::Dumper::Dumper,isn't it.
What is the best practice for this?
I have (not yet) any long running daemon.
I hope this question is clear enough.
---------------------------
Dr. Mark Ceulemans
Senior Consultant
BMC, Belgium
There is no execution speed impact - loading the code and calling Data::Dumper->import(), Carp->import in multiple locations at compile-time has no runtime implications. By importing the Dumper(), carp(), confess(), etc functions into multiple namespaces you may be occupying more memory than necessary but that doesn't mean it is slower.
/s
Yes
No
A crypto-what?
Results (171 votes),
past polls
|
http://www.perlmonks.org/?node_id=264605
|
CC-MAIN-2014-10
|
refinedweb
| 167
| 74.19
|
/>
So, your child gets 78% in both physics and history. Both pretty good grades but as the reader of geeky blogs like this you believe the sciences are more important than the humanities and would have preferred your child to do better in physics than history.
However, we are not necessarily comparing like with like here: 78% in one subject is probably not equivalent to 78% in another. Rather than the absolute percentages we need to calculate and compare the Z-Scores which take into account the averages and ranges of the entire set of scores.
The Z-Score equivalents of percentage scores measure the difference between the mean percentage and the individual scores in units of standard deviation. (The standard deviation can be thought of as a measure of how much, on average, the individual scores vary from the mean.)
As an example, if the mean is 60 and the standard deviation is 10, the Z-Scores of 50%, 60% and 70% would be -1, 0 and 1 respectively.
The formula for calculating Z-Scores is as follows, where μ is the arithmetic mean (the "average" in everyday usage) and σ is the standard deviation.
Calculating Z-Scores
Z = (x - μ) / σ
For this project I will use two sets of fictitious grades with means and standard deviations of:
We can see that the average physics score is a lot lower than the average history score so 78% in physics is already looking a lot better than the same percentage in history. Let's calculate the Z-Scores of 78% for each subject using the formula above.
Z-Scores of 78% in Physics
Z = (78 - 64.8) / 13.03 = 1.01
Z-Scores of 78% in History
Z = (78 - 73.6) / 8.77 = 0.5
So your kid is half a standard deviation above average in history but more than a whole standard deviation above average in physics. Yippee!
Coding
In this project I will write a simple function which takes a list of numbers and returns a dictionary containing the following:
- The arithmetic mean of the data
- The standard deviation of the number
- A list of dictionaries containing the original data values and their equivalent Z-Scores
I'll also write a few lines of code to test the above function and print the results.
The project consists of the following two files which can be downloaded in a zip, or you can clone/download the Github repository if you prefer.
- zscores.py
- zscores_test.py
Source Code Links
Let's look at zscores.py first.
zscores.py
import statistics def calculate(data): """ Returns a dictionary containing: The arithmetic mean of the data The population standard deviation of the data A list of dictionaries containing each data value and its corresponding Z-Score. """ arithmetic_mean = statistics.mean(data) standard_deviation_population = statistics.pstdev(data) zscores = [] for item in data: zscore = (item - arithmetic_mean) / standard_deviation_population zscores.append({"Value": item, "Z-Score": zscore}) result = {"arithmetic_mean": arithmetic_mean, "standard_deviation_population": standard_deviation_population, "zscores": zscores} return result
Firstly we import statistics for its mean and pstdev functions. (You can of course use from statistics import mean, pstdev and just use the function names without the statistics. prefix, but my personal preference is to do it the way shown in the code.)
Next we set a couple of variables with the mean and standard deviation of the data, and create an empty list.
Then we iterate the data, calculating the Z-Score and then adding a dictionary containing the original value and its Z-Score to the list. You could combine the two lines but that would end up a long and messy bit of code.
Finally we combine the mean, standard deviation and Z-Scores list into a dictionary and return it. That's zscores.py finished so let's move on to zscores_test.py.
zscores_test.py
import zscores def main(): print("-----------------") print("| codedrome.com |") print("| Z-Scores |") print("-----------------\n") physics_results = [38,40,43,43,49,54,55,57,61,62,62,63,64,64,65,66,66,67,68,68,69,75,76,78,78,79,80,82,85,87] history_results = [53,55,58,58,64,68,69,69,69,70,70,72,76,76,77,77,77,77,78,79,79,79,79,80,80,81,81,83,86,88] physics_zscores = zscores.calculate(physics_results) history_zscores = zscores.calculate(history_results) print_zscores("Physics", physics_zscores) print_zscores("History", history_zscores) def print_zscores(subject, zscores): """ Print the mean, standard deviation and z-scores in the zscores dictionary in a grid format. """ width = 28 print("-" * width) print("| {:^24} |".format(subject)) print("-" * width) print("| Mean {:>12.2f} |".format(zscores["arithmetic_mean"])) print("| Std.Dev. {:>12.2f} |".format(zscores["standard_deviation_population"])) print("-" * width) print("| Scores | Z-Scores |") print("-" * width) for item in zscores["zscores"]: print("|{:>12.2f}| {:>12.2f}|".format(item["Value"], item["Z-Score"])) print("-" * width) main()
After importing the zscores module we enter the main function, hard-coding a couple of sets of scores and then throwing them at zscores.calculate before finally printing the results with the print_zscores function.
The print_zscores function is a bit fiddly but quite straightforward, printing out the results of zscores.calculate in a table. Now let's run the program with:
Run
python3.7 zscores_test.py
The output is:
Program Output (partial)
----------------- | codedrome.com | | Z-Scores | ----------------- ---------------------------- | Physics | ---------------------------- | Mean 64.80 | | Std.Dev. 13.03 | ---------------------------- | Scores | Z-Scores | ---------------------------- | 38.00| -2.06| | 40.00| -1.90| | 43.00| -1.67| . . . | 76.00| 0.86| | 78.00| 1.01| | 78.00| 1.01| | 79.00| 1.09| | 80.00| 1.17| | 82.00| 1.32| | 85.00| 1.55| | 87.00| 1.70| ---------------------------- ---------------------------- | History | ---------------------------- | Mean 73.60 | | Std.Dev. 8.77 | ---------------------------- | Scores | Z-Scores | ---------------------------- | 53.00| -2.35| | 55.00| -2.12| | 58.00| -1.78| . . . | 77.00| 0.39| | 78.00| 0.50| | 79.00| 0.62| . . . | 83.00| 1.07| | 86.00| 1.41| | 88.00| 1.64| ----------------------------
I have only shown part of the output but the two 78% scores we are interested in are shown in yellow (this is just for the post - the program output is all one colour) to demonstrate that the Z-Scores for 78% are as shown in the calculations carried out earlier.
|
https://www.codedrome.com/z-scores-in-python/
|
CC-MAIN-2020-34
|
refinedweb
| 1,027
| 52.9
|
Spring Integration: a central service and message bus (32 messages)
SpringSource has announced the creation of Spring Integration, a project designed to provide a central service and message bus inside the Spring Framework. This builds on Spring's already-impressive capabilities for providing simple models for using services. The benefit is that a Spring configuration can manage all of the communication protocol, such that the service barely has to know it's a service for an ESB at all. An example is given on Mark Fisher's blog entry discussing the announcement:@MessageEndpoint(input="inputChannel", defaultOutput="outputChannel") public class SimpleAnnotatedEndpoint { @Handler public String sayHello(String name) { return "Hello " + name; } }Note the simplicity of the sayHello() method: there are no message protocols involved, and it's using simple and common (and extremely testable) input and output mechanisms. Spring Integration is a logical next step for Spring, as it already provides services for JMS, remoting, scheduling, email, lifecycle management, transaction management, event publication and subscription, and transaction management. In a way, Spring Integration is an extension of all of these features, adding a generalized message subscription and publication mechanism, instead of relying somewhat on a JMS template or an email template, for example. It's scheduled for 1.0 in Q2 2008.
- Posted by: Joseph Ottinger
- Posted on: December 17 2007 04:42 EST
Threaded Messages (32)
- Very usefull by Robert Bakic on December 17 2007 06:42 EST
- Re: Spring Integration: a central service and message bus by Horia Muntean on December 17 2007 07:55 EST
- Really? by Robert Smith on December 17 2007 10:54 EST
- Re: Really? by Andy Leung on December 17 2007 19:50 EST
- Re: Really? by ss ss on December 17 2007 08:01 EST
- Re: Really? by Robert Smith on December 17 2007 10:03 EST
- Re: Really? by Mark Fisher on December 17 2007 10:39 EST
- Re: Really? by Frank Bolander on December 18 2007 11:13 EST
- Re: Spring Integration: a central service and message bus by Kirstan Vandersluis on December 17 2007 17:45 EST
- SpringIntegration/XAware usefullness for our application by vimarsh vasavada on December 19 2007 10:58 EST
- Re: SpringIntegration/XAware usefullness for our application by Kirstan Vandersluis on January 04 2008 05:51 EST
- Good. by R H on December 17 2007 19:24 EST
- Existing ESB by test test on December 18 2007 01:10 EST
- Re: Existing ESB by Joris Kuipers on December 18 2007 05:26 EST
- Re: Existing ESB by Alef Arendsen on December 18 2007 05:57 EST
- Re: Existing ESB by Alef Arendsen on December 18 2007 05:59 EST
- Spring Batch by Rod Johnson on December 20 2007 16:49 EST
- Re: Spring Batch by Roland Altenhoven on December 20 2007 11:40 EST
- "I want my ESB" Syndrome ? by Guido Anzuoni on December 18 2007 05:23 EST
- Testable code by Tim Ferguson on December 18 2007 10:29 EST
- The Un-ESB? by Kirstan Vandersluis on December 18 2007 12:18 EST
- IS anybody care about the name? by shaji nair on December 18 2007 16:53 EST
- Smart move! by John Davies on December 19 2007 22:27 EST
- Re: Smart move! by Robert Smith on December 20 2007 08:59 EST
- Re: Smart move! by Rod Johnson on December 20 2007 01:37 EST
- Re: Smart move! by Mark Fisher on December 20 2007 06:29 EST
- Re: Smart move! by John Davies on December 23 2007 07:55 EST
- There is nothing wrong with making money by Jack Woods on December 23 2007 08:02 EST
- Re: Smart move! by Roland Altenhoven on December 24 2007 02:22 EST
- Camels, Mules and Kangaroos by Roland Altenhoven on December 20 2007 00:57 EST
- Just read through the reference guide... by Peter Simonetti on February 05 2008 17:21 EST
- cool by clear the clutter Hoff on March 15 2011 15:25 EDT
Very usefull[ Go to top ]
It's a good thing that PtoP and P&S channels will also be available in Spring. It looks like the EBP (event-based processing), a feature available in the Azuki Framework. Azuki's EBP. However, you should also consider using Azuki, as it also provides a straightforward weaving tool to define communication endpoints.
- Posted by: Robert Bakic
- Posted on: December 17 2007 06:42 EST
- in response to Joseph Ottinger
Re: Spring Integration: a central service and message bus[ Go to top ]
Is this related in any way with Apache Camel () ?
- Posted by: Horia Muntean
- Posted on: December 17 2007 07:55 EST
- in response to Joseph Ottinger
Really?[ Go to top ]
As a Spring user since 2003, I'm actually confused by this project. I've always bought into the thinking that Spring rarely creates new solutions when there are already good solutions in place. ORM (Hibernate/IBatis/Toplink/JPA) is an obvious example, as is scheduling (Quartz/Java Timer). I'd say the Spring MVC framework would be a counter-example since things like Struts and Webwork (and really countless others) already existed, but hey, seemed like everyone was making their own MVC framework back then. So I'm not sure how this isn't competing with a lot of other existing solutions, including open-source options like Mule and ServiceMix. I understand that Spring already provides a lot of capabilities that are commonly associated with an ESB (isn't that what this is - a "central service and message bus"?), but why not work with something like Mule so that Mule takes better advantage of existing Spring capabilities? I thought Mule already had some tight integration with Spring - couldn't that just be extended further so that Mule dosen't have to provide things that Spring already does? I guess I'd just like to hear the thinking that resulted in the creation of Spring Integration, and why it was decided not to better integrate with an existing ESB.
- Posted by: Robert Smith
- Posted on: December 17 2007 10:54 EST
- in response to Joseph Ottinger
Re: Really?[ Go to top ]
No offense, but I just feel that you probably have not had time to spend on Spring Framework enough to discover the advantages of it. Take a look at Spring MVC closer, it is not just URL mapping, Model mapping and Error handling; it is, rather complicated, capable to allow developers to build sophisticated web-flow that clearly separates the view and action models. For Struts, it is still good but it is more or less a quick way of doing small scale web site only. The most you get from Spring Framework is not the libraries in it, it's about how the framework tights things together and put an abstraction on top that builds a unique framework for you to build however the way you want. Yeah we know we can download all components separately, like Quartz, JPA, Hibernate or even Struts (you can integrate Struts inside Spring) but as I said, the best part of Spring is the engine that builds a framework for you so you can focus close to 100% on your business model and minimize your time spending on assembling components and its configurations.
- Posted by: Andy Leung
- Posted on: December 17 2007 19:50 EST
- in response to Robert Smith
Re: Really?[ Go to top ]
That's true. I move to Spring MVC after using struts for 3 years. I feel Spring MVC is very easy to learn, and it enables you to write good code. It is powerful framework that lets you customize most of the stuff very easily.
- Posted by: ss ss
- Posted on: December 17 2007 20:01 EST
- in response to Andy Leung
Re: Really?[ Go to top ]
- Posted by: Robert Smith
- Posted on: December 17 2007 22:03 EST
- in response to Andy Leung
No offense, but I just feel that you probably have not had time to spend on Spring Framework enough to discover the advantages of it.Well, I'd actually say that I know Spring MVC inside and out, and I agree with your points. However, it still is different from much of the core Spring framework in that it competes with existing alternatives as opposed to integrating with them. And since reasons can be given for why Spring MVC exists, then it'd be good to hear the reasons for why Spring Integration is being created instead of leveraging what's already been done on something like Mule or Camel. After all, the Camel website says it is a "Spring based Integration Framework". We now have "Spring Integration". Isn't there reason to wonder about the obvious potential overlap?
Re: Really?[ Go to top ]
Rob,. The Struts vs. Spring MVC comparison here is actually quite interesting in that the Spring team has provided multiple options for integrating Struts 1.x based web-tiers with a Spring service layer - from a simple ActionSupport base class that offers convenient access to the Spring ApplicationContext to a delegating request processor that enables full Spring dependency injection for Struts actions. Nevertheless, most Spring MVC and Web Flow users will agree that completely embracing the Spring programming model in the web tier provides immense value beyond merely injecting dependencies. The fact of the matter is that Spring MVC and Web Flow were designed by the Spring team, and they have evolved alongside core Spring and other Spring portfolio products. If you look at recent developments in the Spring Web stack, such as annotation-driven request mapping and first-class JSF integration, it is increasingly evident that these technologies allow Spring users to harness the full potential of the Spring programming model in the web tier. As the Spring Integration product evolves, I believe you will see the same benefits in the integration space. I also believe that with time, those who currently provide integration products that collaborate with Spring will actually see this as a positive move from their perspective as well - and we do look forward to working with them. Our view is wholly positive - Spring already offers a huge amount of support in the enterprise integration arena, but we can provide even more value - in this case extending the Spring model into the domain of Enterprise Integration Patterns. Whereas other products may leverage Spring, their teams' primary focus will always be on their own product - and understandably so. Our focus on the other hand will be on *Spring* - whether it be Spring Integration, the Spring Framework core, or other products within the Spring portfolio. I spoke with many developers at The Spring Experience this past week who were excited about this new offering for that very reason. Again though, I think it's important to recognize the potential of this new product - not only for those developers who are excited about using it, but also to increase the level of integration possible for those who use other products as well. These motivations are not mutually exclusive, and the recognition of that has always been one of the greatest strengths of the Spring model. -Mark
- Posted by: Mark Fisher
- Posted on: December 17 2007 22:39 EST
- in response to Robert Smith
Re: Really?[ Go to top ]
- Posted by: Frank Bolander
- Posted on: December 18 2007 11:13 EST
- in response to Mark Fisher
Rob,So, Spring is going from a "lightweight" framework to an application server?.
Re: Spring Integration: a central service and message bus[ Go to top ]
The information is sketchy, but if you read Mark Fisher's blog and the articles he links to, one core ESB feature. I wonder where Mark would position this project in the integration landscape... I haven't seen anybody from SpringSource compare this project to the common integration technologies like ESB, EII, or ETL. Is the intent to replace or augment these? I'm the project lead at XAware.org, an open source, real-time data integration project. We already use Spring for core processing capabilities like resource management, data source connectivity, security, and management. It *seems* like the data services we produce would work well with the ESB-like features in Spring Integration. But it would be nice to hear the intent and direction from Mark and company.
- Posted by: Kirstan Vandersluis
- Posted on: December 17 2007 17:45 EST
- in response to Joseph Ottinger
SpringIntegration/XAware usefullness for our application[ Go to top ]
Hi Kirstan and everyone, This project sounds interesting..i also looked at XAware ..i am new to integration technology landscape and have few queries .. We have a product to be deployed for N different customers..each is a distinct installation and they are not related. For every installation our product needs to integrate with a customer back-end systems. The customer back-end systems may speak SOAP, XML/Http,.Net Remoting, Propritary protocols etc. Our product typically plays a role of Service Consumer rather then a Service Producer..the interaction is as shown below : [MyDesktopClient]<--> MyProtocol/http <--> [Product/J2EE]<-- protocol X1/X2..Xn---> [customer system] So ,basically from MyProtocol to X1/X2..Xn is required.. Now can we embed spring-integration or XAware engine or any-other-recommended into our Product to solve this kind of problem... Can we do this with ZERO-CODING efforts? If you point me to appropriate docs/demos then also its ok.. Warm Regards, Vimarsh
- Posted by: vimarsh vasavada
- Posted on: December 19 2007 10:58 EST
- in response to Kirstan Vandersluis
Re: SpringIntegration/XAware usefullness for our application[ Go to top ]
Vimarsh, sorry for the delayed reply, just getting back from vacation! While I would need more information to tell for sure, it sounds like a viable solution to embed xaware.org features into your application. There are a number of companies that have done exactly this. The typicial model is that the application developer wants to hide the complexity of the deployed environment behind a set of services, which are then implemented using XAware. The application then calls the XAware-based services using one of the supported technologies like Java API, SOAP, JMS, or simple HTTP. Deployment to a customer site then involves mapping those service definitions to the actual interfaces available. The mapping can occur without coding for many of the data sources - JDBC/Data sources, SOAP, XML/HTTP. Some of the technologies may require more work. .Net Remoting, for example, would require another access package like J-Integra () or JNBridgePro (). For a brief technology overview (flash), you can look at. We also have a bunch of flash tutorials on the site. Of course, please feel free to contact me directly if you'd like more information (kirstan at xaware dot com).
- Posted by: Kirstan Vandersluis
- Posted on: January 04 2008 17:51 EST
- in response to vimarsh vasavada
Good.[ Go to top ]
I'm very happy about this. As a long time Spring user, I've been trying to figure out what integration style I want to focus on and have looked at everything from Service Mix to Mule to simple remoting/JMS, to Spring Web-Services, to full blown commercial ESBs.
- Posted by: R H
- Posted on: December 17 2007 19:24 EST
- in response to Joseph Ottinger
Existing ESB[ Go to top ]
- Posted by: test test
- Posted on: December 18 2007 01:10 EST
- in response to Joseph Ottinger
Spring Integration is a logical next step for Spring, as it already provides services for JMS, remoting, scheduling, email, lifecycle management, transaction management, event publication and subscription, and transaction managementI do understand why Spring would not just pickup one of the existing ESBs. Mule is a really good solution. Why spend time developing one from the ground up? I felt the same way about SpringBatch. Why not pickup Quartz and add modules to do batching? Am I missing something?
Re: Existing ESB[ Go to top ]
- Posted by: Joris Kuipers
- Posted on: December 18 2007 05:26 EST
- in response to test test
I felt the same way about SpringBatch. Why not pickup Quartz and add modules to do batching? Am I missing something?This is a common misconception: Quartz and Spring Batch have different goals. Quartz schedules a given piece of work, Spring Batch will help you to define that actual piece of work. Combining Quartz (or any other scheduler, also commercial ones) with Spring Batch is therefore very common: Spring Batch does not provide any scheduling capabilities by itself. This is also in the FAQ: In larger corporations, a commercial scheduler is typically already in place to start jobs on different machines and platforms using distributed agents: depending on any given scheduling solution, including Quartz, would severely limit the applicability of Spring Batch in such environments. Joris Kuipers Senior Consultant at SpringSource
Re: Existing ESB[ Go to top ]
I think Mark has already provided some great insight into the question of where to position Spring compared to other offerings. I have to say I have had mixed feeling with ESBs in the past. In some application I have had the need to start using the Enterprise Integration Patterns described in Gregor Hohpe's book, but I've never felt the need to add in an ESB. I think that is where Spring Integration is different and that's where it's key differentiator in. The way Spring Integration makes you look at the concept of an ESB has taken away the negative associations with the term ESB that I previously had. I think the one central point to take away here is the fact that our idea and vision has always been to put the application in a very central position and build on top of that. This means the application programming model and its non-invasive nature are what you build the most part of your application with, only referencing any external environment where needed and if you do so, do it in a non-invasive way as possible. On a core level, this means you use dependency injection and AOP to build the fundamentals of your application. Using facilities like init-method or the new @PostConstruct you declare certain things to lifecycle management features to be enabled. If now, you need remoting, you can use one of Spring's exporter (HttpInvokerServiceExporter, RmiServiceExporter) to export your bean to a remote endpoint. If you need transaction management, you start declaring certain methods to be transactional (using XML or @Transactional annotatations). The above happens all in more or less the same declarative, non-invasive way and using the same consistent Spring programming model. Back to Spring Integration. Spring Integration essentially does the exact same thing. It looks at the existing Spring programming and answers the question: 'how can we bring the Enterprise Integration Patterns to this already fully fleshed out programming model'. At first sight there is a slight difference between this and asking 'how can be have our application run inside an ESB', but this slight difference is pretty fundamental IMO. Adrian Colyer gave a keynote at The Spring Experience last week where he described how Spring supports different application styles. If you need a tiny bit of batch, you use the Spring programming model and the add the batching functionality to your existing application. If you need a tiny bit of remoting, you export components from your existing application to remote endpoints. If you need a bit of web functionality, you just deploy an DispatcherServlet component and off you go. If now, you need a little ESB-like behavior consistent with the Enterprise Integration Patterns, you just add in a little Spring Integration. One application with multiple styles. Other than that, as Mark already said, this does in no way mean that third parties cannot add value to the Spring programming model in ways they would like to do so. We're already seeing that with a lot of other vendors (take fore example GigaSpaces). regards, Alef
- Posted by: Alef Arendsen
- Posted on: December 18 2007 05:57 EST
- in response to test test
Re: Existing ESB[ Go to top ]
I do work for SpringSource... I should have added a disclaimer.
- Posted by: Alef Arendsen
- Posted on: December 18 2007 05:59 EST
- in response to Alef Arendsen
Spring Batch[ Go to top ]
Sergey
- Posted by: Rod Johnson
- Posted on: December 20 2007 16:49 EST
- in response to test test
I felt the same way about Spring Batch. Why not pickup Quartz and add modules to do batching? Am I missing something?Spring Batch addresses a different problem from scheduling, although many batch users use Quartz and other scheduling solutions. Spring Batch grew out of the realization by Accenture and SpringSource that there were many common problems in batch that were solved over and over again in different projects. As Accenture use Spring heavily, it was natural to look to a Spring-based solution for those problems. If you look at the Spring Batch home page, you'll see many objectives that are simply not addressed in other products. To quote from that page: simple enough data model or a small batch size Technical Objectives * Batch developers use the Spring programming model: concentrate on business logic; let the framework take care of infrastructure. * Clear separation of concerns between the infrastructure, batch container and the batch application. * Provide common, container services as interfaces that all projects can implement. * Provide simple and default implementations of the container interfaces that can be used ‘out of the box’. * Easy to configure, customize, and extend services, by leveraging the spring framework in all layers. * All existing container services should be easy to replace or extend, without any impact to the infrastructure layer. It's our goal that Spring Batch and Spring Integration will work very naturally together. Remember that people questioned the necessity for Spring itself several years ago (why not just use Struts?) until they looked at it closely enough to realize that it was something new. Rgds Rod
Re: Spring Batch[ Go to top ]
Business Scenarios: I personally think that Processing Languages how e.g. BPEL 2.0 are existing solutions which are destinated to rule and govern the Business Processes in Service-oriented and Event-driven Architectures. Business Processes are in many cases very distinct from technical processes, needing the rich tools and technologies to describe the rules and the flow in a flexible, abstract, standard way and has potencially activations over a long period with continually receiving events. I know that the Spring solutions are excellent products - so I'm very anxious for the announced Spring Integration and their features. Maybe after the availability of the first stable release we can see, which of the features indicates positive distinctions to existing solutions and provides advanced potentialities to solve the existing or future integration problems. Roland
- Posted by: Roland Altenhoven
- Posted on: December 20 2007 23:40 EST
- in response to Rod Johnson
"I want my ESB" Syndrome ?[ Go to top ]
SOAP is not a so appeasing precedent. Guido
- Posted by: Guido Anzuoni
- Posted on: December 18 2007 05:23 EST
- in response to Joseph Ottinger
Testable code[ Go to top ]
I have worked on several projects using commercial ESB's and one of the most difficult parts of every one of those projects was figuring out the best way to unit test and do first level integration tests without the framework fully present. There are many tricks and patterns you can use and I won't go into those but they never felt like they were a complete solution. On my current project we are using Spring, where one of the core concepts is creating testable standalone POJO code. I look forward to these concepts being brought to the ESB world and actually being able to write easily tested code. Tim Ferguson xaware.org
- Posted by: Tim Ferguson
- Posted on: December 18 2007 10:29 EST
- in response to Joseph Ottinger
The Un-ESB?[ Go to top ]
The information I've read so far tells me this new project has key features you typically find in an ESB. But I applaud Mark and Alef for not trying to over-hype it by actually calling it an ESB. Instead, they state they are implementing Enterprise Integration Patterns described in Gregor Hohpe's book. I've blogged about this here: I've also found Gregor's web site that describes these patterns here: I love the fact that SpringSource is implementing well-defined patterns. While it will be obvious to some which of the patterns are implemented in Spring Integration, it would be great if Mark or Alef published a list of patterns they intend to deliver with project.
- Posted by: Kirstan Vandersluis
- Posted on: December 18 2007 12:18 EST
- in response to Joseph Ottinger
IS anybody care about the name?[ Go to top ]
Spring guys you call the product what ever the name you guys want As long as it make the same quality and good enough to slap at the face of some CIO, CTO direct talk agent channel it is GOOD. An ESB should be a framework to provide me flexible, configurable, adaptable framwork to establish transaction aware Transformation, Translation, Wiring, Routing (different protocol, format and layout) of messages based on a common message content such as XML. Those who know JAVA A collection of POJO with supporting tools can do it. However most CTO and CIO do not know JAVA, as long as that model continue, most BIG vendor can sell their GARBAGE to them, Then CTO and CIO Ship that to India or China for cheap labor using average hardworking middle class consumer's money. So again spring guys bring it on and let us do a party on release. Thanks
- Posted by: shaji nair
- Posted on: December 18 2007 16:53 EST
- in response to Kirstan Vandersluis
Smart move![ Go to top ]
This is a seriously clever move by SpringSource in many ways. Reading some of the other comments already posted above, some people are asking why create something new when there are perfectly good implementations out there already. I can only think that these people must believe that open source software just grows on SVN trees or CVS bushes. SpringSource is a company that have employees and those employees have families and mouths to feed. SpringSource is a business and separating their dependency from other ESBs might hurt some slightly but it will add huge value to the Spring brand. Calling it "integration", which is what it is rather than following marketing trends towards TLAs like SOA, JEE, ESB etc. is brave but beautifully simple, this isn't an app server, it's not a bus and it's not pretending to provide a architecture for services, it's just doing what everyone needs, integration. Any decent architect can piece together TLAs from the essentials ingredient of "integration". Nice one guys, 2008's going to be a big year for you. -John-
- Posted by: John Davies
- Posted on: December 19 2007 22:27 EST
- in response to Joseph Ottinger
Re: Smart move![ Go to top ]
- Posted by: Robert Smith
- Posted on: December 20 2007 08:59 EST
- in response to John Davies. Mark - thanks for the useful explanation above. I checked your blog - any more news on when the 0.5 code will be available?
Re: Smart move![ Go to top ]
Rob
- Posted by: Rod Johnson
- Posted on: December 20 2007 13:37 EST
- in response to Robert Smith.We believe that what we have something to add in this space. The fact that numerous existing products heavily promote being "Spring-based" or "Spring integration" indicates the central role of Spring in this space, and the importance of seamless Spring integration. We are offering something different to existing products and are hopeful that they will choose to work to add value over Spring Integration as they have done to date over the Spring Framework. It is important to note that the Spring Framework has always had an integration capability, and that this direction is consistent with that. Also, as the Spring Portfolio has grown (with Spring Web Services, Spring Batch etc.) it makes more and more sense for a higher-level integration programming model to be part of Spring itself. The question is whether or not Spring Integration proves to be a good product and whether it solves problems for users. I'm excited about Mark's ideas and think that its programming model (which I was involved in defining) will be innovative. If people don't agree, they don't need to use it. Spring has never been in the business of me-too projects, and we aren't going to start. Rgds Rod
Re: Smart move![ Go to top ]
- Posted by: Mark Fisher
- Posted on: December 20 2007 18:29 EST
- in response to Robert Smith
any more news on when the 0.5 code will be available?Rob, I will be posting another blog entry tomorrow to walk through a couple samples while also providing the link for anonymous SVN checkout. I have been snowed under this week (figuratively and literally). Thanks for the patience! -Mark
Re: Smart move![ Go to top ]
- Posted by: John Davies
- Posted on: December 23 2007 07:55 EST
- in response to Robert Smith
The point I was trying to make that either I didn't express correctly or you misunderstood was that open source is a business, particularly in the case of Spring, Camel and Mule. It's not just something people donate for the good of mankind. I know Ross who created Mule, James and Rob who created Camel and Rod and Adrian from Spring very well, they're all good drinking buddies of mine (and none of them are American incidentally). Ross is running MuleSource with Dave and they're charging money for license and services, that's where Ross and Dave's salaries come from. James was lucky enough to have been acquired by IONA who pay his salary but the business (of IONA) still need to see generated revenue from Mule and Fuse in order for James to remain on the payroll. Finally Rod pays himself from SpringSource, it pays for his flight, hotels, drinks, wife and children. If Rod (and his colleagues) just sit back and let Mule and Camel take the money for the implementations while SpringSource just charge for the framework there's a limit to how long this can last. The VCs and potential acquirers will not see the same value in SpringSource without productising things like Integration, Batching and perhaps OSGi. With SpringSource's move into providing an implementation they clearly separate themselves from the other two. While this might not seem logical if they were paid salaries for doing something other than open source but business-wise it's a very smart move. It's very difficult to make money from open source, it's only the top brands that make money and the best way is still the acquisition exit plan as demonstrated by JBoss. JBoss never really made much money, they didn't before acquisition and they didn't afterward, they so bad at it that Red Hat was recently downgraded from "buy" to "hold" because of JBoss sales (of lack there off). To make money you have to have something that clearly differentiates you from the general noise and/or be behind a standard or following such as Spring, OSGi etc. -John-.
There is nothing wrong with making money[ Go to top ]
Everyone needs to feed their family and themselves. There is nothing wrong with making money as long that means to getting the money is morally right. With respect to the open source projects only the best products survive. I wish all the best to smart and hard working people out there.
- Posted by: Jack Woods
- Posted on: December 23 2007 20:02 EST
- in response to John Davies
Re: Smart move![ Go to top ]
After all, please - we will not forgot, that the work of the most Open Source Projects are not based on single persons, but rather on many, many peoples which are still and hard working in the background. Many of those people are knowing, that are never receive any money for their engagement. Maybe, the fact, that are involved in the evolution of a great work, which ist very distinct to the form of their daily working-parts, the liberty and the team play, are satisfaction enough for their participation. Many, many thanks to such peoples which are involved in Open Source Projects. I personally think, that the modern Information Technologies - are not in the same advanced state, without the progress which we have received from many of those Open Source Projects over the last years. Sorry, for my simple English. Merry Christmas. Roland SOA Competence Network
- Posted by: Roland Altenhoven
- Posted on: December 24 2007 02:22 EST
- in response to John Davies
Camels, Mules and Kangaroos[ Go to top ]
This year I have taked a closer look to some nice animals how e.g. Camels and Mules - very great animals, which fantastic characteristics. Now, for the next year I have planned to take also a closer look to an other nicely animal: The Kangaroo, which is very dynamic - his principal characteristics are: to spring very fast and efficiently from each site to other ;-) --- Roland SOA Competence Network - Merry Christmas and a Happy New Year to All - Feliz Navidad y Próspero Año Nuevo para Todos - Schöne Weihnachten und ein gutes und erfolgreiches neues Jahr für Alle.
- Posted by: Roland Altenhoven
- Posted on: December 20 2007 00:57 EST
- in response to Joseph Ottinger
Just read through the reference guide...[ Go to top ]
I haven't coded anything with this yet, so I may be jumping the gun, but, at first look, this is just too good to be true. Simple, consistent, and highly-applicable to a number of situations. We have also looked at ESB's in the past, but we haven't needed everything an ESB offers. Instead, we've needed only the "skeleton" in order to implement our own, light-weight message framework. And here it is. Great job!
- Posted by: Peter Simonetti
- Posted on: February 05 2008 17:21 EST
- in response to Joseph Ottinger
cool[ Go to top ]
- Posted by: clear the clutter Hoff
- Posted on: March 15 2011 15:25 EDT
- in response to Joseph Ottinger
thanks for the info its realy been helpful to me
|
http://www.theserverside.com/news/thread.tss?thread_id=47868
|
CC-MAIN-2017-17
|
refinedweb
| 5,751
| 58.52
|
Setting up a demo CalDAV account on iPhone OS 3.0
By Arnaudq-Oracle on Jun 18, 2009
As you probably know, the new iPhone OS 3.0 is CalDAV enabled.
The default configuration panel is very simple (server name, user name, password), but it makes some assumptions that may be valid for a production system but not for a demo server:
- use of standard ports (443 or 80),
- ssl is the default,
- the account url follows a fixed pattern: http(s)://<server name>/principals/users/<user name>/
Demo servers usually run on non standard port numbers and they do not always own the full namespace, leading to account urls (actually principal url) that look more like :<user name>/
Typing this kind of url can be very tedious and error prone, especially given that the advanced configuration panel offers just a tiny text box.
Here is the simplest way that I have found so far to make the process a little bit less painful, assuming that you have a mail account configured already.
1) email the principal url to yourself
... from your regular desktop/laptop email client of course. Check that the url is valid (using a regular browser) before sending it.
The principal url will vary from servers to servers. It is the same that you may have configured if you are using the Apple iCal client (iCal --> Preferences --> Accounts --> <your CalDAV account> --> Server Settings --> Account URL).
2) copy the url from the iPhone Mail App
Go to the iPhone Mail App and open the email.
Press and hold on the url in the message.
You should be asked whether you want to Open or copy the link:
Select copy.
3) go to the CalDAV account creation panel
Settings --> Mail, Contacts, Calendars --> Add Account... --> Other --> Add CalDAV Account
4) Enter the server info
Tap on the Server field. A "Paste" button should appear on top of the text field:
Press on "Paste". The full url is shown:
This is the only trick, really: the client accepts a full url in the server name field.
5) Enter the User Name and password
Go to the User Name field. The full principal url is replaced by the server name only. This is OK:
Finally, enter your password and tap "Next" --> the client indicates "Verifying CalDAV account", then "Account verified".
You can now go to the Calendar application.
|
https://blogs.oracle.com/arnaudq/tags/configuration
|
CC-MAIN-2015-48
|
refinedweb
| 396
| 69.72
|
How to creat a morph dial from morph presets and dforms?
edited October 2012 in Daz Studio Discussion
I've dialled in a character and added a d-form here and there to help shape the head, but I don't know how I can create a single morph dial for the presets + forms. Didn't find anything helpful in the video tutorial section.
I'm guessing I can do this using GenX, but there must be a way to do this using Studio too, otherwise how did DAZ make the V5, M5 and Gorilla dials?
Thanks in advance for any help.
Post edited by cridgit on
One option would be to zero the pose, but not the morphs or DForms, and export as OBJ, then load that as a Morph. But of course that can't be shared with others since it embeds the morphs. You could also spawn one or more morphs from the DForms using the button on the DForm pane, then use ERC Freeze to create a single slider that sets all the morphs at once.
Wao,, I wrote this ,, but if I make mistake,, so I read again,, or I thought if Mr cridgit try more difficult thing what I think,,
now richard reply,, so I relive me^^;
about share,, I know if make one FBM with otehr product morphs, I can not share it.
but why DAZ present user more free shape morph which can castomized and share free?t
though I make new morph from them,
it can not work without genesis, and it can be good commercial for ds and genesis. I believe.
===========================================
mm,, Mr cridgit you are real pose master and meta-data teacher,, @@;
frankly say,, so I can not believe you can not make morph and controller,,^^;
If you want make morph and controller for the shape of genesis which you made in ds 4.5
(applied preset, and use D-form or set many dials and mixed morphs,
it seems same process)
just export the shape of genesis as obj, and reimport it by morphloader pro to zero genesis.
(of course you need set genesis mesh resolation level to zero,, befoer export and reimport )
the controller can change from zero genesis to the shape you made (there need any more D-former,,)
I belive it can work though you made the shape by D-former and preset,,
or I may try same thing and stuck picture?
if your new shape is so far from base genesis, and you need to adjust rigging
you can use adjust rigging to the shape,, and ERC freeze. (I often do it for make my FBM from many FBMs)
hope to keep rig movements by JCMs you may need ERC freeze.
check R.Kane tutorial, or serch by "ERC freeze" in this forum.
or you can copy it from one you applied the preset(if they move bones witch erc freezed for each FBM)
to the genesis which shaped by your new morph. by figure riggng transfer ,
and ERC freeze.
I think there is many tutorial about how to make FBM, it is same how you make the shape.
Thanks Richard. I do want to distribute the morph, so if I use ERC to create single slider for the spawned morphs as well as the dialled Genesis Evolution morphs, can I distribute that single morph slider?
Hi kitakoredaz
Thank you for your explanation: from Richard's post and your step by step guide, I will spawn morphs for d-forms then use ERC freeze to combine morphs into 1 morph. But I don't know if I can distribute the single morph.
For export/import as OBJ it seems I cannot distribute this way, and its mainly a head morph so luckily I do not need to adjust rigging (I've never done that before).
I'll look for the FBM tutorial and hopefully that will also tell me how to make a distributable morph.
Thanks very much for your helpful response.
Hi cridgit... *waves*
If you use the method that Richard described, ERC Freeze, than you should be able to distribute it okay...
I'm still at work right now, so this is from memory.... I can check for sure later tonight when I get home....
Spawn your Dforms into one or more individual morphs....then when you do the ERC Freeze, create a new control property in the Freeze options. This will control all the morph sliders, yours and the dialed ones.
Find your slider in the Parameters pane or wherever you placed it, and test, watching that it does indeed set the right values on all of the morphs. Also test your spawned morph(s) to verify that it/they do not include any of the dialed morphs deltas.
Then save the morph asset checking off only your spawned morph(s) and your control morph, and save to your author/product folder.
In the data folder you should have .dsf's saved for your spawned morph(s) and your controller. The .dsf for the controller should have no vertex delta info in it, only the info for the slider values.
Again this was off the top of my little head, if I goofed I'm sure somebody will correct me... :-)
I'll double check when I get home later...
nicci... :)
I understand your prolbem.
normally I do not care distributable or not. (because I think always, to gather many morph as one new FBM or PBM,
it is more easy )
I am not master at all about ERC and D-former, but yes you can do,,
if you move your controller, it move slider of each character controller, and your new PHM made by some D-forms at same time.
I try to make easy monster which mixed two morphs and PHM mady by two D-former to head and mouth change.
then gather it to one controller which can move every controller. (not gather it one morph,, it can not be distributed I think,,)
1, I make new shape by "Farie" 0.75 and adjust shape by "Fitness ditails " 0.5"
then ,save the shape as character preset. (for safety if I forget the each value,,) as my musclefairy.duf
if I apply it to zero genesiss , it must move Farie to 0.75 Fitness 0.5 as you know ^^
2. first check the preset apply it to zero genesis, then I tweak the shape by new D-former from the shape.
(if apply character preset,, it often difficult which parameter has changed, so it is better discribe
what controller you moved,, because you can not find them in current used category,,
it the preset change the value of each morphs,,)
3 I set two D-former and make each D-former controller by spawn morph.
then set each parameter range, and defautl value to zero by parameter setting tab (small gear click,,ad you know)
picture 1
4a if you need to save each d-former you need to save each D-former as morph.
it is same when you save your made morph . after save it as morph,
there is no difference how you made it. tweaking 3d modelor, or made it by D-former,
now it is just FBM or PBM which have controller to change shape.
so in this case you make new two PHM for each parts.
4b if you want to gather two morphs which you made by D-former, (there seems many way, but if I were you)
export the current shape (applied each characer morph, and changed by D-formers how you want)
as obj.
then in ds 4.5 , turned only D-former values to zero. (it means genesis keep shape which made
other character morphs)
import the morph obj which you exported, by morph loader pro,, (I cholse hexagon bridge,, )
you must need to check on Reverse defomation!!
now you have new PHM in moprh loader section.
but it can change from the character shape you made by other character morphs
to the evil face you made by D-former. (of coure you can apply this new PHM to zero genesis, but
it never make "evil fairly shape" ,, if they did not apply the each character falue,,)
I recommend , save the PHM as morph file on this stage.
(I do not know how you wanto to set category for this PHM^^;
so just save it which can find in morph loader pro section.)
after that I open new scene and apply the saved character preset, then change PHMevilfairly (which I saved)
zero to 1 for check it. OK it works well.:)
=======================================================
then,, I do not know you can distribute,, if you made one controller and
it can turn each character morph value and the new your original
PHM value at same time. but,, I am not distributer,, so make it.;-P
you need really it?
open property editor.
then select genesis, (if I select genesis first, stop long time,, so I open propertyeditor first,,)
first I make empty controller to move every morph. on moprh loader category.
select morph/morphloader section in propertyeditor, then right click > create property.
then set property for your controller, I set name as ctrlTrueEvilFace,, (not good name I know TT;)
you need to find the new ctrlTrueEvilFace in left category pane and right Hieralchy pane of property editor.
1. you need to move Faire 0.75
the true name is FBMFaire , you drug it from left pane to right Hieralchy section , under the
ctrlTrueEvilFace sub-componetns section.
ERC type delta add, sclalar 0.75.
2 then need to move FitnessDettails 0.5 as you know. (you can easily serch morphs if you apply preset for character morphs,
which has * marked ,,)
the true name is FBMFitnessDetails
drug it from left pane to right Hieralchy section, under the ctrlTrueEvilFace >sub componet
ERC type delta add, scalar 0.5
3 I know,, anext you need to move PHMevilfairly you made.
ERC type delta add. scalar 1.
4 then move the ctrlTrueEvilFace and check it!
5 you can check and set it by parameter editor too. (check sub componets what you assigend for the controller)
6 It is controller but I need to save it as morph I think. so save it as morph.
you just check default value and current value turn to zero. (do not use save modified assets!!)
and just save the ctrlTrueEvilFace only. at this time. you need not save other morphs .
you better same product name before you saved the PHM for D-form
7 after save it, load new scene and load new genesis, check your ctrl, and check currently Used
and how move moprhs and your figure change shape.
I hope,, it is what you want@@;
Hi...
Okay so I'm home now and ran through the steps for a ERC Freeze of dialed morphs plus spawned DForm morphs... It works just as it should...
I spawned all the DForms as a single morph, and with other face morphs dialed in I did the ERC Freeze and created a new property. Checked over the morphs listed in the pane and then completed the freeze.
In the Parameters pane, I selected Currently Used to see only the morphs I dialed plus my morph and controller. I ran the controller back and forth and verified that all the sliders moved. With the controller slider set to full, I dialed down the slider for my DFormer spawned morph to verify that all of the dialed morphs were intact. I also turned off the controller morph, setting all sliders to zero, and dialed in my DFormer morph to verify that it didn't include any of the dialed morphs...
Then I saved using File > Save As > Support Asset > Morph Asset(s), and checked only my spawned morph and controller morph.
I then opened the new .dsf files to verify their contents. The spawned morph .dsf listed only 1702 deltas which is about right for my DFormers, and the controller .dsf only listed the formulas for all of the morphs that where used.
Everything is controlled from a single slider, and using the Property Editor you can locate it which ever group you want.
You can also now save as a Character preset to dial your slider from the Content Library, or in your case, make Meta-Data for Smart Content... ;-)
I would also suggest making a list of the morph sets for the morphs that you used. That way if you decide to share it, you can let users know which morphs sets are required to use your morph.
@kitakoredaz
Yep, that's how you would do it for imported .obj morphs based on top of dialed morphs...
If you need anything else....
nicci... :)
HI,, now I think if I download the products,I may hope to move each morphs for each part too :-)
so if it is better (for me^^;), save each morph (D-form morph) ,
after spawn D-form first,
then make controller to move both parts at same time.
(it never change value other product morphs) then save ctrl as morph..
after save each D-former as morphs,
fhe steps how to make controller (property)is same when make one controller to move every morph. I think.
maybe,,, the products files are,
1 INJ character preset file, (which turned every morphs value for making the new character shape =
save as character preset shape only)
2 several D-former morphs to change each parts. (save each D-former as morphs)
3 controller which can move every D-former morphs at same time.( not move other character morph value)
4 I do not know if it is need or not,
remove preset file, which turn only D-former original morphs value to zero . (save as character preset,,)
it means, remove original morphs only. ( it may turnd to the genesis shape which made other product morphs too )
I think it is important for user , with documentation,,as nicci said which product morphs value have changed,,
when applied character preset, though some product not mention when they change genesis female,
male, or base morphs which stuck with genesis as free)
I often made mistake before,,,, applied character file and tweak other ctrl properties, forget to turn other morph value to zero,
then save modified assets ^^ then,, load new genesis,, genesis change somelike female shape and face,,;
Hi nicci *waves back* and kitakoredaz
Thank you both so much for your tremendous help. Unfortunately I'm on an early flight tomorrow and will be on the road again for a bit, but when I get back I'm going to try finish this morph (its for distribution).
Where would I be without you two? :-)
In kitakoredaz's second screenshot, at the bottom-right note the Save With option. You want that to be set to the controller, not the sub-compoenents as in the image, so that the ERC links get saved in your morph's DSF file.
Ah,, I know,, so sorry!!
first I made these morph and save ,check it, then make picture again ><;<br /> so I forgot important thing,,
yes if do not save "with Controller" , and save With sub-componetn
it will make ERC code in your FBMfiles ><; forgive me,, please,,</p>
(but if you do not save these files or, save modified assets, ^^;
when load genesis , the ctrl can not work. it lost the ERC for each sub conponent..I think,,
so need to set option save with "controller"
for each FBM (these are sub componets of new ctrl)
and save the ctrl only. thanks and sorry @@;
Hi...
Have a good trip cridgit... don't work to hard... ;-P
Sorry kitakoredaz... I missed that part on the link to the controller in your screen shot... :red: Richard to the rescue... :cheese:
nicci... :)
|
http://www.daz3d.com/forums/viewthread/10762/
|
CC-MAIN-2016-40
|
refinedweb
| 2,638
| 78.18
|
Local mode simulates a Storm cluster in process and is useful for developing and testing topologies. Running topologies in local mode is similar to running topologies on a cluster.
To run a topology in local mode you have two options. The most common option is to run your topology with
storm local instead of
storm jar
This will bring up a local simulated cluster and force all interactions with nimbus to go through the simulated cluster instead of going to a separate process. By default this will run the process for 20 seconds before tearing down the entire cluster. You can override this by including a
--local-ttl command line option which sets the number of seconds it should run for.
If you want to do some automated testing but without actually launching a storm cluster you can use the same classes internally that
storm local does.
To do this you first need to pull in the dependencies needed to access these classes. For the java API you should depend on
storm-server as a
test dependency.
To create an in-process cluster, simply use the
LocalCluster class. For example:
import org.apache.storm.LocalCluster; ... try (LocalCluster cluster = new LocalCluster()) { //Interact with the cluster... }.
The
LocalCluster is an
AutoCloseable and will shut down when close is called.
many of the Nimbus APIs are also available through the LocalCluster.
DRPC can be run in local mode as well. Here's how to run the above example in local mode:
try (LocalDRPC drpc = new LocalDRPC(); LocalCluster cluster = new LocalCluster(); LocalTopology topo = cluster.submitTopology("drpc-demo", conf, builder.createLocalTopology(drpc))) { System.out.println("Results for 'hello':" + drpc.execute("exclamation", "hello")); }.
Because all of the objects used are instances of AutoCloseable when the try blocks scope ends the topology is killed, the cluster is shut down and the drpc server also shuts down.
Storm also offers a clojure API for testing.
This blog post talk about this, but is a little out of date. To get this functionality you need to include the
storm-clojure-test dependency. This will pull in a lot of storm itself that should not be packaged with your topology, sp please make sure it is a test dependency only,.
One of the great use cases for local mode is to be able to walk through the code execution of your bolts and spouts using an IDE. You can do this on the command line by adding the
--java-debug option followed by the parameter you would pass to jdwp. This makes it simple to launch the local cluster with
-agentlib:jdwp= turned on.
When running from within an IDE itself you can modify your code run run withing a call to
LocalCluster.withLocalModeOverride
public static void main(final String args[]) { LocalCluster.withLocalModeOverride(() -> originalMain(args), 10); }
Or you could also modify the IDE to run “org.apache.storm.LocalCluster” instead of your main class when launching, and pass in the name of the class as an argument to it. This will also trigger local mode, and is what
storm local does behind the scenes.
You can see a full list of configurations here.
These, like all other configs, can be set on the command line when launching your topology with the
-c flag. The flag is of the form
-c <conf_name>=<JSON_VALUE> so to enable debugging when launching your topology in local mode you could run
storm local topology.jar <MY_MAIN_CLASS> -c topology.debug=true
|
https://apache.googlesource.com/storm/+/357df98180fe368efa9e5195c4fa38670ae64a9f/docs/Local-mode.md
|
CC-MAIN-2020-50
|
refinedweb
| 575
| 55.74
|
Changing Directive Inputs Programmatically Won't Trigger ngOnChanges In AngularJS 2 Beta 9
In Angular 2, when you set directive input bindings using the "[value]" property syntax, the ngOnChanges life-cycle method will be called once when the input value is initialized and then once for each subsequent change. As I just discovered, however, the ngOnChanges life-cycle method is only invoked when the changes are driven by the template syntax. If, on the other hand, you change the directive input value programmatically, the ngOnChanges life-cycle method will not be invoked.
Run this demo in my JavaScript Demos project on GitHub.
To demonstrate this, all we have to do is set up a simple component that exposes an input. Then, we can try to change that input programmatically and check to see if the component's ngOnChanges life-cycle method is called. In this case, I'll use a simple Counter component that renders a "value" input.
In the following code, notice that I am using two different means to set the Counter's value input. First, I'm using the template syntax to set the initial value to zero. Then, I'm using setInterval() so update the Counter's value input programmatically. This way, we can see how the two different approaches affect the ngOnChanges life-cycle method.
- <!doctype html>
- <html>
- <head>
- <meta charset="utf-8" />
- <title>
- Changing Inputs Programmatically Won't Trigger ngOnChanges In AngularJS 2 Beta 9
- </title>
- </head>
- <body>
- <h1>
- Changing Inputs Programmatically Won't Trigger ngOnChanges In AngularJS.
- requirejs(
- [ /* Using require() for better readability. */ ],
- function run() {
- ng.platform.browser.bootstrap( require( "App" ) );
- }
- );
- // --------------------------------------------------------------------------- //
- // --------------------------------------------------------------------------- //
- // I control the root of the application.
- define(
- "App",
- function registerApp() {
- // Define the App component metadata.
- ng.core
- .Component({
- selector: "my-app",
- directives: [ require( "Counter" ) ],
- // Let's configure a live query for the Counter component so that
- // we can change the [value] programmatically.
- queries: {
- "counter": new ng.core.ViewChild( require( "Counter" ) )
- },
- // In this template, notice that we are binding a static value
- // to the [value] property using the template syntax. Then, we
- // are going to continue to update the value programmatically.
- template:
- `
- <counter [value]="0"></counter>
- `
- })
- .Class({
- constructor: AppController,
- // Define the life-cycle methods on the prototype so that they
- // are picked up an run-time.
- ngAfterViewInit: function noop() {}
- })
- ;
- return( AppController );
- // I control the App component.
- function AppController() {
- var vm = this;
- // I hold the value that will be piped into the Counter input.
- var counterValue = 0;
- // Expose the public methods.
- vm.ngAfterViewInit = ngAfterViewInit;
- // ---
- // PUBLIC METHODS.
- // ---
- // I get called after the view has been initialized and the live
- // queries have been bound.
- function ngAfterViewInit() {
- // Now that we have an injected reference to the Counter
- // component instance, lets set up an interval to start
- // incrementing the [value] property.
- // --
- // CAUTION: We can't set the initial value directly inside the
- // ngAfterViewInit() method or we'll run into a change-detection
- // error in which the View is changed as a side-effect of the
- // view-init event. As such, we have to wrap any change inside
- // some sort of timeout / interval.
- setInterval(
- function updateCounter() {
- vm.counter.value = ++counterValue;
- },
- 1000
- );
- }
- }
- }
- );
- // --------------------------------------------------------------------------- //
- // --------------------------------------------------------------------------- //
- // I provide a counter that outputs the bound value.
- define(
- "Counter",
- function registerCounter() {
- // Define the Counter component metadata and return the constructor.
- return ng.core
- .Component({
- selector: "counter",
- inputs: [ "value" ],
- template:
- `
- <strong>Current Count:</strong> {{ value }}
- `
- })
- .Class({
- // Leaving the constructor as a no-op since it doesn't have to
- // do anything.
- constructor: function noop() {},
- // I get called whenever the bound inputs change.
- ngOnChanges: function( event ) {
- // Here, we are simply going to output every input change
- // and determine whether or not it is the first change, or
- // some subsequent change.
- console.log(
- "ngOnChanges [first]:",
- event.value.isFirstChange(),
- "-",
- event.value.currentValue
- );
- }
- })
- ;
- }
- );
- </script>
- </body>
- </html>
Now, you might be thinking that the use of a static value in the template is affecting the ngOnChanges life-cycle. But, don't worry, it is not. If we were to remove the "[value]" binding altogether, we'd get the same result, only without an initial value. And, when we run the above code, we get the following page output:
As you can see, the ngOnChanges directive life-cycle method is invoked for the initial binding defined by the "[value]" property syntax. But, it is never called when we update the value programmatically within the setInterval() method.
While you're likely to use the property syntax most of the time when setting up an input binding, this is a really important detail to understand when you start creating custom inputs. Because, while a naked component might use something like a "[value]" binding, the ngModel's valueAccessor proxy will have to change the input programmatically. And, at that point, the underlying component's ngOnChanges life-cycle method may not work in the way you expected.
Looking For A New Job?
Ooops, there are no jobs. Post one now for only $29 and own this real estate!
Reader Comments
Have you tried using/changing actual property of the parent programmatically? I mean instead of this:
counter [value]="0"
do this
counter [value]="someVarUpdatedProgrammatically"
@Yakov,
If you do that, then Yes, the ngOnChanges life-cycle method will continue to work as expected because you are using the property binding template syntax. And, in 99% of the cases, this is likely what we are doing. But, in a small set of edge-cases, you might not be able to / have access to the template itself and have to do things programmatically.
I plan to follow up with a few more posts on the topic to discuss the edge-cases.
@All,
This discovery has got me thinking about the implied contract of an Input property (as opposed to a vanilla public property). Some thoughts on the matter:
It's not something that's truly outlined in the docs; but, this is just my reasoning on the evidence.
@All,
I took a stab at trying to figure out how to trigger the ngOnChanges() life-cycle event manually when you are updating "Input" properties programmatically (such as in an ngModel proxy):
It is completely non-trivial. I hope that I am waaaaay off here and that there is simpler solution.
So, I hit this issue as well. I was passing a new array into a component but that component's onChanges() lifecycle was not being triggered. However it wasn't the onChanges issue, it was that the setter for the input was not being called thus not triggering the onChanges for the component. It was unusual b/c the getter for the input was called and changes were reflected on the template by using {{inputValue}}, however the setter for the input was still not called.... This led me to some discoveries.
1) for every event involving the input (hover, click, ..), the getter of an input is called.. this does not mean a changeDetection cycle is preformed.
2) a changeDetection cycle only occurs in a component for events involving the component, so if you pass a value to some component it wont update until you mouseOver it or something.
3) (maybe same as 2) changeDetection does not go though the full component tree every cycle, only down to which component the event occurred in..
here, I created this simple component to help "see" when change detection occurs in a given component, really helps demystify CD. Just drop it into any component to see when an event triggers change detection in that component.
Solution... donno yet, if you find a good way to trigger CD programmaticly with ng2 let me know.
import { Component } from '@angular/core';
import template from './ChangeDetector.html';
@Component({
selector: 'change-detector',
template
})
export class ChangeDetector {
value : number = 0;
update() : number{
this.value += 1;
console.log("running change detection" + this.value);
return 0;
}
}
//template
<div hidden>
{{value}}
{{update()}}
</div>
Hello,
is there any progress? Any workaround? I am on 2.1.1 and I met with the same behavior. I tried to manually call detectChanges(), but it doesnt help.
@Jerrad,
if you reinit the array ngOnChange fires:
parent
setTimeout(() => { this.updateSomething(); }, 0);
}
updateSomething) {
this.someArray = [{name: 'somename'}]; // or []
}
child
ngOnChanges(changes: SimpleChanges) {
console.log('CHANGES ', changes);
}
@Ben,
onChange life cycle not working even if i change value of parent inside setInterval
|
https://www.bennadel.com/blog/3053-changing-directive-inputs-programmatically-won-t-trigger-ngonchanges-in-angularjs-2-beta-9.htm
|
CC-MAIN-2018-34
|
refinedweb
| 1,362
| 55.84
|
How to get the scintilla view0/view1 HWNDs
I’m trying to write an external app which will try to automate Notepad++ with the same SendMessage interface that plugins use. However, I am having trouble figuring out how to grab the HWND for the two views’ Scintilla editors.
I tried looking through the messages at, but (aside from having to use google’s cache, because of the recent website issues), I cannot find a message that seems to give me the HWND for the views.
I tried enumerating all the “scintilla” sub-windows of the main Notepad++ window… I found five listed; if I use the first one that has visibility turned on, I can get the HWND for (what seems to usually or maybe always be) the active scintilla. But how can I tell whether it’s for PRIMARY or SECONDARY, and how can I get the HWND for the other?
I also tried looking through the PythonScript source code, but I cannot figure out where the editor1/editor2 come from – I found that
importScintilla()sets them, and there’s a call to
importScintilla(mp_scintilla, mp_scintilla1, mp_scintilla2), but I’ve gotten confused as to the call path earlier than that, and cannot figure out where
mp_scintilla*come from.
Plugins have an exported
setInfo()function that will receive a
NppDatastruct. It is defined as:
struct NppData { HWND _nppHandle; HWND _scintillaMainHandle; HWND _scintillaSecondHandle; };
Specifically for PythonScript, see
You might get more ideas generated if you tell us more about this mysterious “external app”. ;)
@Alan-Kilborn said:
You might get more ideas generated if you tell us more about this mysterious “external app”. ;)
I’ve always wanted a PerlScript plugin… and, barring that (since I don’t have the VS development environment, and doubt I’ll get one set up at home in the foreseeable future), as an alternative, I wanted to try to make the Perl modules necessary to use SendMessage() to interface with Notepad++ externally, with some or all of the same functions as available to PythonScript.
The referenced thread got my mind working on this again. I made a major breakthrough recently in figuring out how to properly read back the LPARAM-based message-return values (though I’m still also struggling with the TCHAR** wparam values… but I can live without the few messages that use those, for now), so that encouraged me that it wasn’t a complete dead end, and I started working on it again.
As I said in the OP, I can enumerate the various Scintilla HWNDs using general Win32 methodology for finding child-windows with certain attributes – but there are more than two Scintilla HWNDs (presumably because other plugins have created extra scintilla instances for their consoles and similar; oh, and probably the search results is another scintilla instance)… but maybe there’s a way through shooting messages to those HWNDs to determine whether they are one of the two main Notepad++ views (and if so, which one)? Or a way to correlate the scintilla HWNDs to some SendMessage-accessible information in Notepad++ itself.
In the plugin list of the old PluginManager there was a plugin called ActiveX plugin. This plugin exposes an ActiveX programming interface to Notepad++. You can download it here. I don’t know if you have access from Perl to ActiveX objects and I don’t know if the plugin still works with recent versions of Notepad++ (it works at least with v7.5.6).
At plugin installation you will be asked to register the plugin’s ActiveX object. If you decline that, it only allows you to access the programming interface from within Notepad++ by using its menu entries
Execute custom scriptand
Execute current document. If you allow the registration, you can access the programming interface from every script language that supports ActiveX objects as well.
The plugin exposes an INppApplication object which represents Notepad++. This object has an array property editors of type INppEditor which represent the two Scintilla views. These objects have the property hWnd which should be what you are looking for.
Moreover, the plugin provides an abstraction for the most important operations on documents and their management. This will save you some coding effort.
Example code in VBScript:
Set objNppApplication = CreateObject("NotepadPlusPlus.Application") intScWnd0 = objNppApplication.editors(0).hWnd
Good luck!
if call it from outside, meaning within process but not as a plugin you could enumerate the child windows like this
import ctypes, ctypes.wintypes EnumWindowsProc = ctypes.WINFUNCTYPE(ctypes.c_bool, ctypes.wintypes.HWND, ctypes.wintypes.LPARAM) user32 = ctypes.WinDLL('user32', use_last_error=True) scintilla_hwnd = {0:None, 1:None} def find_scintilla_windows(npp_handle): def foreach_window(hwnd, lParam): curr_class = ctypes.create_unicode_buffer(256) user32.GetClassNameW(hwnd, curr_class, 256) if curr_class.value == 'Scintilla': if user32.GetParent(hwnd) == npp_handle: if scintilla_hwnd[0] is None: scintilla_hwnd[0] = hwnd elif scintilla_hwnd[1] is None: scintilla_hwnd[1] = hwnd return False return True return not user32.EnumChildWindows(npp_handle, EnumWindowsProc(foreach_window), None) find_scintilla_windows(user32.FindWindowW(u'Notepad++', None)) print scintilla_hwnd
I assume that enumerating child windows follow the order of how those where created previously therefore it should be reliable.
@Ekopalypse said:
I assume that enumerating child windows follow the order of how those where created previously therefore it should be reliable.
Thanks. Your enumeration code looks very similar to what the Perl library I’m using does the enumeration under-the-hood. I guess I’ll run some experiments to see if the order is consistently the same, and consistently gets me access to the correct Scintillas.
@dinkumoil suggested,
This plugin exposes an ActiveX programming interface to Notepad++.
Interesting idea. I’ve never tried to access ActiveX from Perl. If I find inconsistencies in the enumerated-scintilla order, I may have to look into that extra layer.
I’ll report back when I’ve confirmed/contradicted the enumeration order, and on any other progress or ideas I have on the HWND side of things. There also may be other topics created as I run across other issues.
Thanks for everybody’s input so far. Much appreciated.
Looks good; however, I would put a
uin front of
'Scintilla'.
:)
- Michael Vincent last edited by
I’ve always wanted a PerlScript plugin
Yeah, me too …
There’s this: NPPRunPerl, “A Notepad++ plugin to comfortably create and run a perl script to transform the current file or selection”.
I tried it in the past and it wasn’t quite as robust or reliable as I’d hoped.
Cheers.
So it would be interesting to keep reading about your marriage-of-npp-and-perl saga here in this thread. Do keep us posted.
I presume that once you have the
scintilla_hwnds, you’ll proceed by creating a wrapper function for each of the Scintilla functions, like the Pythonscript
Editorclass provides in that language for the instantiated objects
editor1and
editor2.
Maybe something like this Python, but translated into Perl:
sci_GetDirectPointer = 2185 sci_GetMargins = 2253 scintilla_direct_pointer = [ SendMessage(scintilla_hwnd[0], sci_GetDirectPointer, 0, 0), SendMessage(scintilla_hwnd[1], sci_GetDirectPointer, 0, 0) ] scintilla_direct_function = ctypes.WinDLL('SciLexer.dll', use_last_error=True).Scintilla_DirectFunction def editor_GetMargins(view): return scintilla_direct_function(scintilla_direct_pointer[view], sci_GetMargins, 0, 0)
which leads into usage:
print('view 0 number_of_margins:', editor_GetMargins(0))
(I use the get-margins example function here because I have code like the above in use, because the Pythonscript API doesn’t have anything for it, or I can’t find it.)
Anyway, definitely looking forward to reading more about the Perl thing here!
- PeterJones last edited by PeterJones
@Alan-Kilborn said:
you’ll proceed by creating a wrapper function for each of the Scintilla functions, like the Pythonscript Editor class
Yes, my goal would be to get as many of the Messages (Notepad++ or Scintilla) as I can. Though full implementation (or even matching all of PythonScript’s available messages) will take a while.
once you have the scintilla_hwnds
I’ve thought of a fallback, though it’s not elegant. I might be able to launch my ExternalPerlScript automator from Notepad++ (maybe through NppExec, or irony of ironies, through PythonScript), and pass along the scintilla HWNDs as arguments.
Or, if I don’t need a dockable component, maybe @Michael-Vincent could get me to the point he mentioned here, with a gcc-compiled plugin, as long as it doesn’t need a dockable component. Because really, if I have a plugin that just responds to a couple of queries, I might be able to send NPPM_MSGTOPLUGIN from the external perl process to talk with the internal simple-plug.
Hmm, how about it, @Michael-Vincent? Do you still have a super-simple plugin that just uses gcc/g++ and gmake/dmake to get an about-box? If I could start with that, I’d have a new direction to explore (and might eventually figure out how to get more fancy, and learn more about the plugins, even if I never go dockable…)
- Michael Vincent last edited by
When I started the simple plugins I “wrote” (read: liberally borrowed others’ code and put into the provided N++ plugin template), I started with gcc / gmake (provided from Strawberry Perl). As that other thread indicates, once I started to add dockable components, it wouldn’t work. There were also some other issues in that I wasn’t statically linking the libc so if I compiled for 32-bit but then had my 64-bit Perl in my path, my plugin wouldn’t load in 32-bit N++.
Anyway, to see examples of some simple plugin with a Makefile for gcc:
@PeterJones said:
I’ll run some experiments to see if the order is consistently the same
So far, so good.
At first, I was seeing the Find Results scintilla window (and PythonScript scintilla window) show up in my enumeration before the first, but when I’d run @Ekopalypse’s python-based enumeration, it always only listed the four, and the first two always matched the two editors. I eventually determined that the Find Results and Python Script have an intermediate generation between the NPP hwnd and the scintilla hwnd; Eko’s python enumeration correctly checked whether their immediate parent was NPP, and ignored it if not; once I added that to my enumeration, it seems to keep the two editors as the first two.
I’ll probably make that assumption for now, and move forward.
Again, thank you all.
Looks good; however, I would put a u in front of ‘Scintilla’.
Ahh, python2 :-D can handle this :-D
@PeterJones
I don’t have access to my windows pc at the moment, but I played a little bit trying to embed perl in a c++ program. Seems to be pretty straightforward.
On linux afer installing libperl-dev this compiled.
#include <EXTERN.h> #include <perl.h> static PerlInterpreter *my_perl; void run(char ** filename) { my_perl = perl_alloc(); perl_construct(my_perl); perl_parse(my_perl, NULL, 0, filename, (char **)NULL); perl_run(my_perl); perl_destruct(my_perl); perl_free(my_perl); } int main(int argc, char **argv, char **env) { char* filename[] = {"", "/home/eko/Documents/c_c++/test.pl"}; run(filename); }
Next week when I’m at home again, I could try to see if this can be embedded into a win dll if no one else found/reported how to do it in the meantime.
UPDATE: repo for the perl module is now publicly available at …
So far:
- I can get the HWND for the Notepad++ GUI as well as the first two child-Scintilla windows, which are assumed to be editor1 and editor2
- I can send generic messages to the Notepad++ object. There are wrappers for grabbing an integer or a single string from the LPARAM. Still need to work on other LPARAM and/or WPARAM return-data-types
- Still need to write all the Perl wrappers for Notepad++ and Scintilla messages (ie, the bulk of it)
Once I got it to the point that I had the editor HWNDs, and was able to convert the NPP and Scintilla messages from the
.hfiles to Perl sub-modules (to give Perl easy access to a hash of messages for each window type), I decided it was sufficient to move from my private subversion repo to a public GitHub repo.
It shows the general structure, and the whole look-and-feel of my coding, though there’s not much there.
- Ekopalypse last edited by Ekopalypse
@PeterJones @michael-vincent
I need some advice.
I cloned the perl github repo and built, tested and installed perl - so far so good, got some interpreter which seems to do what it should.
Now, when trying to build an embedded perl with the same, except for the path of the test script, c++ code as posted before it crashes when doing perl_parse.
It looks like the issue is that it is looking for a path or perl module called MSWin32-x64-multi-thread like
…EmbeddedPerl\x64\Debug\lib\MSWin32-x64-multi-thread.
Searching the web seems to indicate that MSWin32-x64-multi-thread is only the constructed architecture name used to build unique
interpreters but not that there is something created in .\lib directory called MSWin32-x64-multi-thread. Did you faced some similar issue
when working with perl? Do you have some good mailing lists, forums etc… where I could ask for clarification about this?
Thank you
I cloned the perl github repo and built, tested and installed perl
On Windows? Wow, that’s impressive. I always use Strawberry Perl. (But from what little I know of embedding Perl in other apps/libraries/etc, I think you need to use the same compiler – or at least compatible compiler options – so it might be a necessary evil.)
Do you have some good mailing lists, forums etc… where I could ask for clarification about this?
Michael and I are both on perlmonks.org (he’s vinsworldcom, I’m pryrt), and that’s where I’d recommend asking.
that’s where I’d recommend asking.
Thanks, and that’s what I did :-)
|
https://community.notepad-plus-plus.org/topic/17992/how-to-get-the-scintilla-view0-view1-hwnds/4?lang=en-US
|
CC-MAIN-2022-40
|
refinedweb
| 2,301
| 50.16
|
There are two ways to set up an Angular 2 manual steps.
There are also two ways to set up the way you want to develop your app with ASP.NET Core. One way is to separate the client app completely from the server part. It is pretty useful to decouple the server and the client to create almost independent applications and to host on different machines. The other way is to host the client app inside the server app. This is useful for small applications, having everything in one place, and it is easy to deploy on a single server.
In this post, I'm going to show you, how you can set up an Angular 2 app, which will be hosted inside an ASP.NET Core application using Visual Studio 2015. The Angular-CLI is not the right choice here because it already sets up a development environment for you and all that stuff is configured a little bit differently. The effort to move this to Visual Studio would be too 2 2. If the Browser calls a URL that doesn't exist on the server, it could be": "dotnetpro-ecollector", "version": "1.0.0", "scripts": { "start": "tsc && concurrently \"npm run tsc:w\" \"npm run lite\" ", "lite": "lite-server", "postinstall": "typings install && gulp restore", "tsc": "tsc", "tsc:w": "tsc -w", "typings": "typings" }, ", "es6-promise": "^3.1.2", "es6-shim": "^0.35.0", "jquery": "^2.2.4", "bootstrap": "^3.3.6" }, "devDependencies": { "gulp": "^3.9.1", "concurrently": "^2.0.0", "lite-server": "^2.2.0", "typescript": "^1.8.10", "typings": "^1.0.4" } }
In this listing, I changed a few things:
- I added "&& gulp restore" to the postinstall script.
- I also added Gulp to the devDependency to typings.
After the file is saved, Visual Studio tries to load all the packages. This works, but VS shows a yellow exclamation point for errors. Until recently, I didn't figure out what was going wrong here. To be sure all packages are propery installed, use the console, change directory to the current project, and type
npm install
The post install will possibly fail'); gulp.task('default', function () { // place code for your default task here }); gulp.task('restore', function() { gulp.src([ 'node_modules/@angular/**/*.js', 'node_modules/angular2-in-memory-web-api/*.js', 'node_modules/rxjs/**/*.js', 'node_modules/systemjs/dist/*.js', 'node_modules/zone.js/dist/*.js', 'node_modules/core-js/client/*.js', 'node_modules/reflect-metadata/reflect.js', 'node_modules/jquery/dist/*.js', 'node_modules/bootstrap/dist/**/*.*' ]).pipe(gulp.dest('./wwwroot/libs')); });.config
{ "globalDependencies": { "es6-shim": "registry:dt/es6-shim#0.31.2+20160317120654", "jquery": "registry:dt/jquery#1.10.0+20160417213236" } }
Now we have to configure TypeScript itself. We can also add a new item, using Visual Studio to create a TypeScript configuration file. I would suggest not to use the default content, but rather were to add the "compileOnSave" flag and to exclude the "node_modules" folder from the TypeScript build, because we don't need to build containing the TypeScript files and because we moved the needed JavaScript to
./wwwroot/libs.
If you use Git or any other source code repository, you should ignore the files generated out of our TypeScript files. In the case of Git, I simply add another .gitignore to the
./wwwroot/app folder.
#remove generated files *.js *.map
We do this because the JavaScript files are only relevant to run the application and should be created automatically in the development environment or on a build server, before deploying the app.
The First App
That is all we need to prepare an ASP.NET Core project in Visual Studio 2015. Let's start to create the Angular app. The first step is to create an index.html in the folder
wwwroot:
<html> <head> <title>dotnetpro eCollector</title> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <link rel="stylesheet" href="css/styles.css"> <!-- 1. Load libraries --> <!-- Polyfill(s) for older browsers --> <script src="libs/shim.min.js"></script> <script src="libs/zone.js"></script> <script src="libs/Reflect.js"></script> <script src="libs, and so on. Create a new JavaScript file, call it systemjs.config.js, and paste the following content into it:
(function (global) { // map tells the System loader where to look for things var map = { 'app': 'app', 'rxjs': 'lib/rxjs', '@angular': 'lib//upgrade' ];);
This file also defines a main entry point which is a main.js. This file is the transpiled TypeScript file main.ts we need to create in the next step. The main.ts bootstraps our Angular 2 app:
import { bootstrap } from '@angular/platform-browser-dynamic'; import { AppComponent } from './app.component'; bootstrap(AppComponent);
We also need to create our first Angular2 component. Create a TypeScript file with the name app.component.ts inside the app folder:
import { Component } from '@angular/core'; @Component({ selector: 'my-app', template: '<h1>My first Angular App in Visual Studio< 2 magic happened, you'll see the contents of the template defined in the app.component.ts.
Conclusion
I propose to use Visual Studio and Gulp, and you need to use a console in this case, but web development will be a lot faster and a lot more lightweight with this approach. In one of the next posts, I'll show how I currently work with Angular 2.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }}
|
https://dzone.com/articles/setup-angular2-amp-typescript-in-a-aspnet-core-pro
|
CC-MAIN-2017-47
|
refinedweb
| 889
| 58.28
|
thxthx
public class Main { public static void main(String[] args) { final Counter counter = new Counter(); Thread a = new Thread(new Runnable(){ public void run() { counter.increment(); } }); Thread b = new Thread(new Runnable(){ public void run() { System.out.println("b of result:"+counter.value()); } }); a.start(); b.start(); } } class Counter { private int c = 0; public synchronized void increment() { c++; } public synchronized void decrement() { c--; } public synchronized int value() { return c; } }
JAVA-rookie wrote:The operating system starts thread A, then a context switch happens, putting A into a paused state. The OS starts thread B and gets as far as the println() statement before the next context switch back to Thread A.
hi guys:
i read the java tutorial about Memory Consistency Errors. it mention that synchronization can create happens-before relationship to avoid memory consistency errors.
and i run my own code. but the errors still there. the Thread b sometime will print out "0", if u run the program many times. Could any one explain this for me.
The way you're usiing synchronized has little effect because the synchronization only lasts during the method call, not between them.The way you're usiing synchronized has little effect because the synchronization only lasts during the method call, not between them.
Object monitor = new Object(); boolean canGoOn = false; private void pause() throws InterruptedException { synchronized(monitor) { while(!canGoOn) monitor.wait(); } } private void release() { sychronized(monitor) { canGoOn = true; monitor.notifyAll(); } }
|
https://community.oracle.com/thread/2074178?tstart=57780
|
CC-MAIN-2016-44
|
refinedweb
| 238
| 56.86
|
Infographics Maker Templates 3.3 Crack Mac Osx
“The Bat! Message Recovery is a Windows utility that helps you retrieve the messages from The Bat! client. The application supports all editions of The Bat!. It works by scanning and finding the e-mail messages in The Bat! database, and then extracts them into the MSG format. You can then open the saved messages with Microsoft Outlook and other similar applications.”/*
* Copyright (c) 2018 THL A29 Limited, a Tencent TencentCloud.Vpc.V20170312.Models
{
using Newtonsoft.Json;
using System.Collections.Generic;
using TencentCloud.Common;
public class DnsName : AbstractModel
{
///
/// 解析ip或者host后的顺序,默认0,更改为按数字顺序。
///
[JsonProperty(“SortOrder”)]
public ulong? SortOrder{ get; set; }
///
/// 查询记录对应的ip和/或/host,多个host分别用/分 eea19f52d2
This project can also be used as an alarm clock. It shows a lamp on the screen while the calculations are being carried out, and it will be deactivated when the calculations have finished.
Open Source Software
PayCalc is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3 of the License, or (at your option) any later version.
PayCal LICENSES for details.
Help us to spread the word about PayCalc by visiting
Please read the copyright notice at the top of the source code to learn more.
On the right side, you will find links to other useful open source software.
Instructions to install the PayCalc GUI:
1. Download, unzip and install the PayCalc GUI.
2. To use the GUI, start the PayCalc program. After it is run, it will wait for your input in the top window.
3. In the upper right side of the window, you will see a menu (File). On the menu, you can choose (Edit File). In the next window, you can edit the necessary system variables. Here, you can change the name and location of the wallet file.
4. To start the PayCalc program, press the (Start) button.
5. To close the program, press the (Stop) button.
6. To exit the program, press the (Quit) button.
Screenshots of PayCalc:
|
https://kwan-amulet.com/archives/1876131
|
CC-MAIN-2022-27
|
refinedweb
| 347
| 68.77
|
table of contents
- buster 4.16-2
- buster-backports 5.02-1~bpo10+1
- testing 5.03-1
- unstable 5.03-1
NAME¶arch_prctl - set architecture-specific thread state
SYNOPSIS¶
#include <asm/prctl.h> #include <sys/prctl.h>
int arch_prctl(int code, unsigned long addr); int arch_prctl(int code, unsigned long *addr);
DESCRIPTION¶arch_prctl() sets architecture-specific process or thread state. code selects a subfunction and passes argument addr to it; addr is interpreted¶On success, arch_prctl() returns 0; on error, -1 is returned, and errno is set to indicate the error.
ERRORS¶
- EFAULT
- addr points to an unmapped address or is outside the process address space.
- EINVAL
- code is not a valid subcommand.
- EPERM
- addr is outside the process address space.
CONFORMING TO¶arch_prctl() is a Linux/x86-64 extension and should not be used in programs intended to be portable.
NOTES¶arch_prctl() is supported only on Linux/x86-64 for 64-bit programs currently. overwrite¶mmap(2), modify_ldt(2), prctl(2), set_thread_area(2)
AMD X86-64 Programmer's manual
|
https://manpages.debian.org/buster-backports/manpages-dev/arch_prctl.2.en.html
|
CC-MAIN-2019-47
|
refinedweb
| 171
| 53.27
|
:
12 thoughts on “Polymorphism”
Isn’t there a way, in Python, to make a class “uninstantiable” like Java’s abstract keyword or something similar?
Yes, it’s possible to disable instantiation but it’s kinda unpythonic. That is to say, there is no abstract in Python.
There is a module in Python called “abc”, which allows us to create Abstract classes, this will make class uninstantiable. See below example.
import abc
class Test(object):
__metaclass__ = abc.ABCMeta
.
.
Hello, why using Polymorphism?
Just to organize the code? If yes, Ok.
Because we could write the code above as:
Hi Gilberto, thanks for your comment! You are right it can be written that way. Polymorphism has many other uses than organizing the code.
A few practical examples are:
Let’s take the example of a method with different objects. We can then define a single function that accepts those types such that we only need to define one function instead of duplicating it:
Then:
Output:
I hope you enjoy the site, feel free to ask anything, more tutorials coming soon 🙂
I don’t think so:
This way shows the same output
Hey hey!
The difference is only the internal structure of the program. Polymorphism adds a layer of abstraction, it forces the methods of the classes/objects to exist.
This is the first example of Polymorphism I have seen, since I started learning Python OOP. Thanks
what is significance of raise NotImplementedError in above example?
i tried by putting null value to car[] or if i take method with null then i get syntax error but i am not getting above error use
It’s significant and should not be reachable. The idea is to use the Car class as abstraction, thus not creating objects from Car class. The car class should be seen as abstract class or interface, we define the methods but leave out all implementation.
Let’s take a natural language example: if we create a car, it should have the ability to drive, start engine, accelerate and so on. How a car does this depends entirely on the type.
By using polymorphism we can create new objects with various implementations, but with the same methods.
So in this example, def makeSound is the definition of a regular function, not attached to any class?
Yes, it takes as input the object that has the same structure. It takes any object as long as it has those functions. In the example the object given as parameter to the function makeSound(), must have the function sound(). It will execute that function regardless of the actual implementation and accepts any object of the same form (structure).
There is another form of polymorphism with an abstract class. I have extended this tutorial with more examples for you. If you have any further questions feel free to ask.
|
https://pythonspot.com/en/polymorphism/
|
CC-MAIN-2017-09
|
refinedweb
| 474
| 65.22
|
What code was responsible for generating a view
Project description
more.whytool: find out what code was responsible for generating a response
more.whytool lets you create a tool that tells you what view code was responsible for handling a request.
To create such a tool you do the following, for instance in the main.py of your project:
from more.whytool import why_tool from .someplace import SomeApp def my_why_tool(): SomeApp.commit() why_tool(SomeApp)
where SomeApp is the application you want to query, typically the root application of your project.
Now you need to hook it up in setup.py to you can have the tool available:
entry_points={ 'console_scripts': [ 'morewhytool = myproject.main:my_why_tool', ] },
After you install your project, you should now have a morewhytool tool available. You can give it requests:
$ morewhytool /some/path
It tells you:
- What path directive handled the request.
- What view directive handled the request.
CHANGES
0.5 (2017-01-13)
- Initial public release.
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
|
https://pypi.org/project/more.whytool/
|
CC-MAIN-2019-30
|
refinedweb
| 185
| 69.07
|
Using the Model Class
- PDF for offline use
-
- Sample Code:
-
- Related APIs:
-
Let us know how you feel about this
0/250
The Model class greatly simplifies rendering complex 3D objects when compared to the traditional method of rendering 3D graphics. Model objects are created from content files, allowing for easy integration of content with no custom code.
Overview
The MonoGame API includes a
Model class which can be used to store data loaded from a content file and to perform rendering. Model files may be very simple (such as a solid colored triangle) or may include information for complex rendering, including texturing and lighting.
This walkthrough uses a 3D model of a robot and covers the following:
- Starting a new game project
- Creating XNBs for the model and its texture
- Including the XNBs in the game project
- Drawing a 3D Model
- Drawing multiple Models
When finished, our project will appear as follows:
Creating an Empty Game Project
We’ll need to set up a game project first called MonoGame3D. For information on creating a new MonoGame project, see this walkthrough on creating a Cross Platform Monogame Project.
Before moving on we should verify that the project opens and deploys correctly. Once deployed we should see an empty blue screen:
Including the XNBs in the Game Project
The .xnb file format is a standard extension for built content (content which has been created by the MonoGame Pipeline Tool). All built content has a source file (which is an .fbx file in the case of our model) and a destination file (an .xnb file). The .fbx format is a common 3D model format which can be created by applications such as Maya and Blender.
The
Model class can be constructed by loading an .xnb file from a disk that contains 3D geometry data. This .xnb file is created through a content project. Monogame templates automatically include a content project (with the extension .mgcp) in our Content folder. For a detailed discussion on the MonoGame Pipeline tool, see the Content Pipeline guide.
For this guide we'll skip over using the MonoGame Pipeline tool and will use the .XNB files included here. Note that the .XNB files differ per platform, so be sure to use the correct set of XNB files for whichever platform you are working with.
We'll unzip the Content.zip file so that we can use the contained .xnb files in our game. If working on an Android project, right-click on the Assets folder in the WalkingGame.Android project. If working on an iOS project, right-click on the WalkingGame.iOS project. Select Add->Add Files... and select both .xnb files in the folder for the platform you are working on.
The two files should be part of our project now:
Xamarin Studio may not automatically set the build action for newly-added XNBs. For iOS, right-click on each of the files and select Build Action->BundleResource. For Android, right-click on each of the files and select Build Action->AndroidAsset.
Rendering a 3D Model
The last step necessary to see the model on-screen is to add the loading and drawing code. Specifically, we’ll be doing the following:
- Defining a
Modelinstance in our
Game1class
Modelinstance in
Game1.LoadContent
- Drawing the
Modelinstance in
Game1.Draw
Replace the
Game1.cs code file (which is located in the WalkingGame PCL) with the following:
public class Game1 : Game { GraphicsDeviceManager graphics; // This is the model instance that we'll load // our XNB into: Model model; public Game1() { graphics = new GraphicsDeviceManager(this); graphics.IsFullScreen = true; Content.RootDirectory = "Content"; } protected override void LoadContent() { // Notice that loading a model is very similar // to loading any other XNB (like a Texture2D). // The only difference is the generic type. model = Content.Load<Model> ("robot"); } protected override void Update(GameTime gameTime) { base.Update(gameTime); } protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); // A model is composed of "Meshes" which are // parts of the model which can be positioned // independently, which can use different textures, // and which can have different rendering states // such as lighting applied. foreach (var mesh in model.Meshes) { // "Effect" refers to a shader. Each mesh may // have multiple shaders applied to it for more // advanced visuals. foreach (BasicEffect effect in mesh.Effects) { // We could set up custom lights, but this // is the quickest way to get somethign on screen: effect.EnableDefaultLighting (); // This makes lighting look more realistic on // round surfaces, but at a slight performance cost: effect.PreferPerPixelLighting = true; // The world matrix can be used to position, rotate // or resize (scale) the model. Identity means that // the model is unrotated, drawn at the origin, and // its size is unchanged from the loaded content file. effect.World = Matrix.Identity; // Move the camera 8 units away from the origin: var cameraPosition = new Vector3 (0, 8, 0); // Tell the camera to look at the origin: var cameraLookAtVector = Vector3.Zero; // Tell the camera that positive Z is up var cameraUpVector = Vector3.UnitZ; effect.View = Matrix.CreateLookAt ( cameraPosition, cameraLookAtVector, cameraUpVector); // We want the aspect ratio of our display to match // the entire screen's aspect ratio: float aspectRatio = graphics.PreferredBackBufferWidth / (float)graphics.PreferredBackBufferHeight; // Field of view measures how wide of a view our camera has. // Increasing this value means it has a wider view, making everything // on screen smaller. This is conceptually the same as "zooming out". // It also float fieldOfView = Microsoft.Xna.Framework.MathHelper.PiOver4; // Anything closer than this will not be drawn (will be clipped) float nearClipPlane = 1; // Anything further than this will not be drawn (will be clipped) float farClipPlane = 200; effect.Projection = Matrix.CreatePerspectiveFieldOfView( fieldOfView, aspectRatio, nearClipPlane, farClipPlane); } // Now that we've assigned our properties on the effects we can // draw the entire mesh mesh.Draw (); } base.Draw(gameTime); } }
If we run this code we’ll see the model on-screen:
Let’s look at some of the more important parts of the code above.
Model class
The
Model class is the core class for performing 3D rendering from content files (such as .fbx files). It contains all of the information necessary for rendering, including the 3D geometry, the texture references, and
BasicEffect instances which control positioning, lighting, and camera values.
The
Model class itself does not directly have variables for positioning since a single model instance can be rendered in multiple locations, as we’ll show later in this guide.
Each
Model is composed of one or more
ModelMesh instances, which are exposed through the
Meshes property. Although we may consider a
Model as a single game object (such as a robot or a car), each
ModelMesh can be drawn with different
BasicEffect values. For example, individual mesh parts may represent the legs of a robot or the wheels on a car, and we may assign the
BasicEffect values to make the wheels spin or the legs move.
BasicEffect Class
The
BasicEffect class provides properties for controlling rendering options. The first modification we make to the
BasicEffect is to call the
EnableDefaultLighting method. As the name implies, this enables default lighting, which is very handy for verifying that a
Model appears in-game as expected. If we comment out the
EnableDefaultLighting call, then we’ll see the model rendered with just its texture, but with no shading or specular glow:
//effect.EnableDefaultLighting ();
The
World property can be used to adjust the position, rotation, and scale of the model. The code above uses the
Matrix.Identity value, which means that the
Model will render in-game exactly as specified in the .fbx file. We’ll be covering matrices and 3D coordinates in more detail in part 3, but as an example we can change the position of the
Model by changing the
World property as follows:
// Z is up, so changing Z to 3 moves the object up 3 units: var modelPosition = new Vector3 (0, 0, 3); effect.World = Matrix.CreateTranslation (modelPosition);
This code results in the object being moved up by 3 world units:
The final two properties assigned on the
BasicEffect are
View and
Projection. We’ll be covering 3D cameras in part 3, but as an example, we can modify the position of the camera by changing the local
cameraPosition variable:
// The 8 has been changed to a 30 to move the Camera further back var cameraPosition = new Vector3 (0, 30, 0);
We can see the camera has moved further back, resulting in the
Model appearing smaller due to perspective:
Rendering Multiple Models
As mentioned above, a single
Model can be drawn multiple times. To make this easier we will be moving the
Model drawing code into its own method that takes the desired
Model position as a parameter. Once finished, our
Draw and
DrawModel methods will look like:
protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); DrawModel (new Vector3 (-4, 0, 0)); DrawModel (new Vector3 ( 0, 0, 0)); DrawModel (new Vector3 ( 4, 0, 0)); DrawModel (new Vector3 (-4, 0, 3)); DrawModel (new Vector3 ( 0, 0, 3)); DrawModel (new Vector3 ( 4, 0, 3)); base.Draw(gameTime); } void DrawModel(Vector3 modelPosition) { foreach (var mesh in model.Meshes) { foreach (BasicEffect effect in mesh.Effects) { effect.EnableDefaultLighting (); effect.PreferPerPixelLighting = true; effect.World = Matrix.CreateTranslation (modelPosition); var cameraPosition = new Vector3 (0, 10, 0); var cameraLookAtVector = Vector3.Zero; var cameraUpVector = Vector3.UnitZ; effect.View = Matrix.CreateLookAt ( cameraPosition, cameraLookAtVector, cameraUpVector); float aspectRatio = graphics.PreferredBackBufferWidth / (float)graphics.PreferredBackBufferHeight; float fieldOfView = Microsoft.Xna.Framework.MathHelper.PiOver4; float nearClipPlane = 1; float farClipPlane = 200; effect.Projection = Matrix.CreatePerspectiveFieldOfView( fieldOfView, aspectRatio, nearClipPlane, farClipPlane); } // Now that we've assigned our properties on the effects we can // draw the entire mesh mesh.Draw (); } }
This results in the robot Model being drawn six times:
Summary
This walkthrough introduced MonoGame’s
Model class. It covers converting an .fbx file into an .xnb, which can in-turn be loaded into a
Model class. It also shows how modifications to a
BasicEffect instance can impact
Model drawing..
|
https://developer.xamarin.com/guides/cross-platform/game_development/monogame/3d/part1/
|
CC-MAIN-2017-13
|
refinedweb
| 1,644
| 54.93
|
Event Logging and Distributed Logging in ASP.NET (4/6)
Event Logging in .NET
Once
EventCollection has been implemented it is time to implement the
LocalEventLog that will use the
EventCollection class to buffer the local events. This is also a fairly simple class as shown below. This class is implemented as a singleton with a property to access the singleton instance. It contains a hashtable of collections of events. The individual event collections are created as required. Almost all the work gets done in the
LogEvent method. Here we check first to see if we need to create a new
EventCollection. If we do, then an instance of
EventCollection is created and added to the hashtable. Then the event is added to the collection. Of course, we check to see whether the buffer is full before adding to the set. If it is full the remote method is called to flush the set. The class also has an initialization method for setting up the remoting infrastructure.
public if(!_events.ContainsKey(s)) { EventCollection eSet = new EventCollection(s); _events.Add(s,eSet); } EventCollection currentSet = (EventCollection) _events[s]; // add the event to the set // if the set is full flush this set to the // global sink lock(currentSet) { if(currentSet.IsFull) { // Flush the events foreach(Event ee in currentSet) { _GlobalLog.WriteLog(s, ee); } currentSet.Clear(); } // add the event current.
public class GlobalEventLog : MarshalByRefObject { public void WriteLog(EventSource s, Event e) { System.Console.WriteLine("Source: " + s.MachineName + ":" + e.ToString()); } }
Created: January 16, 2003
Revised: January 16, 2003
URL:
|
http://www.webreference.com/programming/asp/logging/4.html
|
CC-MAIN-2017-13
|
refinedweb
| 255
| 60.31
|
Not too long ago GitHub introduced their combined status API. Since that time it has become an official component of the API. Today I would like to highlight the power of this approach and the integration possibilities this enables across GitHub’s diverse ecosystem of Satellite Applications.
But first what is a Satellite Application?
The ReviewNinja team at SAP uses the term “satellite application” to refer to any application built on top of the GitHub API, this also includes using the provided oauth mechanism for user authentication. These Satellite Applications enrich the ecosystem around GitHub and provide a strong incentive for using GitHub over competitor offerings. In essence these tools are third party apps that integrate with GitHub, but we think “satellite app” sounds much cooler.
Traditionally, inter-satellite app integrations have been downplayed in favour of integrations directly into GitHub. These statuses are usually surfaced on a pull request level, but can exist on any ref. With the combined status API, any third party tool can create a status on any ref with a given context. Any tool can write into any context, but most tools choose to use a logical namespace, for example we at ReviewNinja use the context “code-review/reviewninja”. Think of these contexts as buckets, for any ref we can throw in as many status updates as we like and watch our ref progress from pending to successful. The combination occurs by taking the latest status for each context (bucket) and reporting the combined status, the aggregation of all states (if one context is failing the combined status is failing, if all are passing the combined status is passing).
Currently, the combined status is not surfaced in GitHub’s UI, surely this will be available soon. Here is how we display this information in ReviewNinja.
As you can see we have two unique contexts reporting into Pull Request #565, but this could expand to any number of integrations. This low-coupled approach allows us to easily and seamlessly integrate with a plethora of Satellite Applications for free! Fantastic!
We are already exploring advanced workflow scenarios for ReviewNinja, for example maybe some contexts do not make sense from a code review perspective and should therefore be ignored. Perhaps certain contexts have more authority than others and should be given higher priority. The combined status API gives us the foundation needed to explore all these possibilities and create the best possible code review tool. This is our goal at ReviewNinja
What is next?
The integrations described here constitute a very light-weight approach. Each context contains a description and can be in one of four states (pending, success, error, or failure). However, no additional information can be provided to enrich the status.
Imagine if a linter could report to the status API a list of all errors and warnings? ReviewNinja could take the integration one step further and report the exact error on the exact line, the same scenario would be possible with testing suites. Such a rich and visually appealing approach to code review simply does not exist right now. But it sure would be fantastic!
These are the ideas and concerns the ReviewNinja team tackle every day. We are constantly looking for feedback and advice to achieve our goal of providing the best code review experience on GitHub. Please feel free to reach out to us by creating an issue or contributing to our feedback request. And don’t forget you can start using ReviewNinja today!
|
https://blogs.sap.com/2014/12/02/third-party-app-integrations-with-github/
|
CC-MAIN-2017-43
|
refinedweb
| 581
| 53.31
|
Part 1: simplr-forms — declarative forms for React. Why are we doing this?
This is series of posts documenting the development of @simplr/react-forms library.
Originally posted on April 8th.
- Part 1: Why are we doing this?
- Part 2: Core, validation, testing.
- Part 3: First e2e flow, FormStore.ToObject()
- Part 4: Normalizers and Modifiers
- Part 4.X: Status report
Why do we need yet another forms library?
Over the past few years we worked with React, we tried MANY form libraries that are out there.
Problems with existing libraries
- There is no de-facto standard library for forms in React, like there is for routing (react-router).
- They all have their limitations and usually big ones.
- Performance is usually bottlenecking as the project grows.
- No really flux friendly.
- Not friendly with TypeScript.
- There is not a single library that is built in the React way.
Ok, so the big one is the libraries can never be consumed in a React way, which raises a whole bunch of other problems:
- You have to learn everything from the ground up to use the library.
- When you write your app in a declarative react way, you have to fallback to imperative or semi-imperative way for forms.
- Library has to reinvent the wheel to do things.
- Lifecycle management becomes hell on earth.
- Race conditions appear out of the blue.
I could go on here, but Michael Jackson said it better here and the problems are more than apparent.
So why do we start now?
Well, technically, we started almost a year ago (Jan 12, 2016) by building a
simplr-forms v0.0.4 and used it internally.
v1.0.0 was also released internally on May 30, 2016.
v2.0.0 as well, released internally on September 7th, 2016.
We introduced v3.0.0-beta in Dec 7, 2016. And we made all possible mistakes during these 3 versions (hopefully).
We had 113 different alphas, betas, RCs and stable ones in between.
And more than a year later, we still don’t see a proper library for forms out there. This is why we are putting the efforts into spec’ing, architecting and building the library out there in public.
What’s the plan?
For a few days already we were talking and ploting the architecture, raising and solving problems on the whiteboard. At the moment we are ready to to start coding, but first — we’ll try to write out ideas.
Integrity with flux
During 3 versions, we tried different approaches, to have forms:
- Live on their own
- Live in the flux lifecycle
- Something in between the first two
At the moment, we think the best way to have responsive forms (as of fast response time), we have to make the library live on its own.
And for architectural integration (flux/redux/mobx/etc), we hope to make initial flux and redux versions ourselves and then get some help from community as the library becomes the best forms library out there.
Let’s get to the point
Lifecycle
For forms lifecycle, we will be using somewhat flux-ish architecture of actions and at first and it’s gonna be quick-and-dirty switches and immutable objects. Later on, we will incorporate simplr-flux package, as we will open-source it soon.
Principles
- The source code is written in TypeScript.
- Architecture is compile-time checked as much as possible.
- Declarative API first. An imperative API is not going to be first-class citizen.
- Simplicity in developer-facing API, performance in the backend.
- Have React Fiber in mind.
- Validation is first-class citizen.
Code samples
To get the idea how JSX looks:
<Form onSubmit={this.onFormSubmit}>
<Text name="FirstName">
<Validators.MinLength value={3}
<Validators.MaxLength value={10}
</Text>
<Submit>Submit</Submit>
</Form>
And the rendered DOM:
<form>
<input type="text" name="FirstName" />
<button type="submit">Submit</button>
</form>
As you can see, no wrappers, no custom elements, divs, spans or anything, just a clean form with inputs and buttons.
Having clean output really helps with styling, because you see the same structure in JSX, except for
simplr-forms specific validators inside text element.
For now, we have to render validators in the virtual DOM, because
inputelements cannot have childrens (at least some of them).
That will change with React Fiber coming in as it will have the ability to return array of elements in the
render function. With that change, we will be able to render validators alongside, but, of course, that is yet to come.
Validators
A really important functionality while using forms is data validation.
And the main pains behind validation are:
- Reusability of validators code
- An easy way of defining the requirements of your data
Imperative API is probably the worst solution for this, which leaves a nice spot for declarative one. Unfortunately, there is no way of doing that today, but…
We have another package
simplr-validation, which was released internally on Feb 15, 2016 and did not change much since. We had 19 versions of the package and most of them were adding functionality, not changing the core principles. Which means the package is done right from the start.
More interesting use case
Want to have your mind blown?
Suppose you need to validate your username to be:
- 4 to 128 symbols of length
- Unique in your system, which is a call to your server
Take a second to think how would you approach it today. Then look at this:
import { MinLength, MaxLength, DebouncerValidator } from "simplr-forms-core/validators";
<Text name="username">
<MinLength value={4}
<MaxLength value={128}
/*
Waits for 3 seconds before executing to the next validator.
If the value changes before debouncer expires,
the validation will be stopped and rejected as an error.
Also, form store will ignore validation changes until the value is the newest,
therefore store will update validation result to Valid or Invalid
only after ALL needed validators are executed.
*/
<DebouncerValidator value={3000} />
<CostyValidatorToTheServer />
</Text>
What you see here is a declarative way to check all of the requirements AND do it in an optimal manner, which is debouncing for some time until the user stops typing the username.
Mind blown? 🎉 I hoped so. 👏
Next in series
Part 2: Core, validation, testing.
|
https://medium.com/simplr/part-1-simplr-forms-declarative-forms-for-react-why-are-we-doing-this-db46a910dbd4
|
CC-MAIN-2017-43
|
refinedweb
| 1,039
| 65.32
|
After my frustrations displaying simple alerts and Open File dialogs in Xcode’s Swift playgrounds, I thought that I had better try something more constructive. This article starts to look at some key features that scripters need, and how accessible they are in Xcode’s Swift playgrounds.
First and foremost, scripts range in scale from quick-and-dirty few-liners to solve a simple problem, up to large multi-file projects. So the overhead, in terms of coding, effort, and skills, must also scale well. When I need to write a quick script to distribute files from one folder to several others, sorting by file extension, it should take a few minutes, and need just a few lines of code.
Little problems need little solutions, not a week of full-blown app development.
Swift is a highly versatile multi-paradigm language which can, for instance, be used as a functional language. But it also needs to work as a simple, imperative language, that starts at the first line of code and plods through to the end. So far, I see nothing in the Swift language to prevent that simple, linear style of coding, and lots of scope for tackling more complex problems using more advanced features of the language. The snag is that not all parts of macOS make that easy, so simple scripting is likely to need some simplifying wrappers for system calls.
Common little scripting tasks often deal with moving and copying files. This is one area where dealing with the macOS Foundation’s FileManager looks good, and a distinct improvement on AppleScript. Built long before OS X/macOS, AppleScript still deals with old file path conventions, horrible things like
Macintosh HD:Users:jsmith:Documents:myfile.text. Feed that to something expecting a macOS POSIX path like
/Users/jsmith/Documents/myfile.text and you have a nasty error. So AppleScript has to convert between the two.
Swift knows nothing about what happened in that halcyon past. You can
moveItem(at: URL, to: URL) using URLs, or
moveItem(atPath: String, toPath: String) using POSIX paths. FileManager is indeed the scripter’s best friend: want the contents of a complete file? Then just
contents(atPath: String) will deliver it. Want to see if two files have the same contents? Then
contentsEqual(atPath path1: String, andPath path2: String) will tell you, true or false.
If you do much scripting with files, have a browse through the documentation for FileManager and you be very happy indeed to be scripting with Swift. Yes, there’s also good support for iCloud-based items.
There is a catch, of course. At present, you have to fart about with some niceties before a playground will allow you to use these. As ever, this is not exactly clearly documented anywhere, but I discovered that, for example,
import Cocoa
import Foundation
var str = "Hello, playground"
let FM = FileManager.default
do {
try FM.attributesOfItem(atPath: "/Users/hoakley/Documents")
} catch let error as NSError {
print("Error: \(error.domain)")
}
works very nicely, and returns a great long list of the attributes of the item at the POSIX path which you feed it.
Of course most of the examples in books are likely to lead you to
let FM = NSFileManager()
as the initialisation, which fails on two counts. Swift playgrounds helpfully tells you one of them, that NSFileManager has been renamed FileManager, so you then try
let FM = FileManager()
which fails on the other count, which Swift playgrounds won’t tell you about, but it’s the wrong form of initialisation, and just returns an inscrutable error when you try running it.
I have also been skimming through Erica Sadun’s superb book on Swift playgrounds, which is invaluable. There is a catch here too: as an ace Swift coder, she uses powerful language features to structure her playgrounds. She has one playground, written in up-to-date Swift 3, which works beautifully, and counts the number of words in text files which you drop onto its ‘catch’ area. It is elegantly implemented in a style which might make many scripters turn and run for AppleScript. I will leave you with screenshots of its code, as an exercise in understanding Swift.
If you thought that AppleScript’s droplet harness was a pain, then you might want something more direct in Swift scripting.
Essential reading
Erica Sadun, Playground Secrets and Power Tips, Apple iBooks store.
Apple, Using Swift with Cocoa and Objective-C, free from Apple iBooks store.
|
https://eclecticlight.co/2016/11/24/xcode-swift-playgrounds-2-scripting-files/?like_comment=7702&_wpnonce=685c50a36e
|
CC-MAIN-2020-16
|
refinedweb
| 745
| 62.07
|
Aspect…
Have.
In my previous blog post I hopefully was able to demonstrate how low the entry barrier is to asynchronous remote communication. It´s as easy as hosting a service like this
10 using(var serverSpace = new CcrSpace().ConfigureAsHost("wcf.port=8000"))
11 {
12 serverSpace.HostPort(
13 serverSpace.CreateChannel<string>(t => Console.WriteLine(t)),
14 "MyService");
and connecting to such a service like this:
16 using(var clientSpace = new CcrSpace().ConfigureAsHost("wcf.port=0"))
17 {
18 var s = clientSpace.ConnectToPort<string>("localhost:8000/MyService");
19
20 s.Post("hello, world!");
Under the hood this is net.tcp WCF communication like Microsoft wants you to do it. But on the surface the CCR Space in conjunction with the Xcoordination Application Space provides you with an easy to use asynchronous API based on *Microsoft´s Concurrency Coordination Runtime.
Calling a service and not expecting a response, though, is not what you want to do most of the time. Usually you call a service to have it process some data and return a result to the caller (client). So the question is, how can you do this in an asynchronous communication world? WCF and the other sync remote communication APIs make that a no brainer. That´s what you love them for. So in order to motivate you to switch to an async API I need to prove that such service usage won´t become too difficult, I guess.
What do you need to do to have the server not only dump the message you sent it to the console, but to return it to the caller? How to write the most simple echo service? Compared to the ping service you saw in my previous posting this is not much of a difference. Here´s the service implementation:
11 public static string Echo(string text)
12 {
13 Console.WriteLine("SERVER - Processing Echo Request for '{0}'", text);
14 return text;
15 }
Yes, that´s right. It´s a method with a return value like you would have used it in a WCF service. Just make sure the request and the response message types are [Serializable].
And this is the code to host this service:
19 using(var serverSpace = new CcrSpace().ConfigureAsHost("wcf.port=8000"))
20 {
21 var chService = serverSpace.CreateChannel<string, string>(Echo);
22 serverSpace.HostPort(chService, "Echo");
The only difference is in the two type parameters to CreateChannel(). They specify this is request/response channel. The first type is for the request type, the second for the response type.
When does it start to become difficult, you might ask? Async communication is supposed to be difficult. Well, check out the client code. First it needs to connect to the service:
40 using(var clientSpace = new CcrSpace().ConfigureAsHost("wcf.port=0"))
41 {
42 var chServiceProxy = clientSpace.ConnectToPort<string, string>("localhost:8000/Echo");
This isn´t difficult, or is it? Like before for the one-way ping service the client just needs to specify the address of the service. But of course the local channel needs to match the remote channel´s signature. So the client passes in two type parameters for the request and response message types.
And now the final part: sending the remote service a message and receiving a response. Sure, this looks a bit different from your usual remote function call. But I´d say it´s not all too inconvenient:
44 chServiceProxy
45 .Request("hello!")
46 .Receive(t => Console.WriteLine("Answer from echo service: {0}", t));
You send the request by calling the Request() method with the request message. Then you wait for the response using the Receive() method passing it a continuation. It´s a method to be called once the response arrives at the caller.
Remember: This is asynchronous communication. That means the code after the Receive() will be very likely executed before (!) that of the continuation. So if you want to have a conversation between client and service you´d need to be a bit careful how you model it. Don´t just write channel.Request().Receive() statements one after another. But let that be a topic for a future posting.
For now I hopefully was able to show you how easy request/response communication can be even in the async world. There are of course other ways to accomplish this, but here I wanted to show you the most simple way so you can get started without jumping through too many mental async hoops.
This is all well and good – but it won´t work in your WinForms application. Because when the response arrives it arrives on a different thread than the request had been issued from. And that means you can just simply display the result in a WinForms control. That can only be done on the WinForms thread.
I take up the challenge and demonstrate to you this is easy too using the CCR Space. Check out this tiny WinForms application:
Whatever you type into the text box gets sent to the echo service and returned in upper case letters only. You can guess how easy such a service will look:
17 using (var serverSpace = new CcrSpace().ConfigureAsHost("wcf.port=8000"))
18 {
19 serverSpace.HostPort(
20 serverSpace.CreateChannel<string, string>(t => t.ToUpper()),
21 "Echo");
Also the client is as easy as before:
23 using (var clientSpace = new CcrSpace().ConfigureAsHost("wcf.port=0"))
24 {
25 var chService = clientSpace.ConnectToPort<string, string>("localhost:8000/Echo");
But then to let the client form know about the service, the service channel is injected like any other ordinary dependency:
29 Application.Run(new Form1(chService));
…
15 public partial class Form1 : Form
16 {
17 private Port<CcrsRequest<string, string>> echoService;
18
19 public Form1(Port<CcrsRequest<string, string>> echoService)
20 {
21 InitializeComponent();
22
23 this.echoService = echoService;
24 }
And how does the client form interact with the service so there will be no cross-thread calling problem?
26 private void button1_Click(object sender, EventArgs e)
27 {
28 this.echoService
29 .Request(this.textBox1.Text)
30 .Receive(t => listBox1.Items.Add(t),
31 CcrsHandlerModes.InCurrentSyncContext);
32 }
It´s the same pattern as above: call Request() on the channel and append Receive() to this call. However, as the final parameter to Receive() pass in CcrsHandlerModes.InCurrentSyncContext. This advises the client Space to switch to the WinForms synchronization context before executing the response message handler.
Notice, however, the signature of the remote service port. It´s Port<CcrsRequest<TRequest, TResponse>>. By wrapping the application message in a CcrsRequest<,> message the channel.Request().Receive() magic is possible.
Asynchronous communication certainly is different from synchronous. It takes some time wrap your head around it. And you certainly don´t want to start with it slinging threads yourself. But with a pretty high level async API like the CCR and the CCR Space provides, it´s not beyond the average developers reach, I´d say.
What´s next? I guess you want to know how errors are handled in distributed async scenarios. Let that be the topic of my next posting. Until then trust me: It´s much easier than with WCF ;-)
If you´ve read my previous posts about why I deem WCF more of a problem than a solution and how I think we should switch to asynchronous only communication in distributed application, you might be wondering, how this could be done in an easy way.)
17 using(var serverSpace = new CcrSpace().ConfigureAsHost("wcf.port=8000"))
18 {
19 var chService = serverSpace.CreateChannel<string>(Ping);
20 serverSpace.HostPort(chService, "Ping");
21
22 Console.WriteLine("SERVER - Running...");
23 Console.ReadLine(); // keep alive.
In my previous blog post I argued WCF was not the most usable and most easy to learn way for communication in distributed applications. This is due to its focus on synchronous communication (even though you can do asynchronous communication as well).
Distributed applications by their very nature cannot communicate synchronously. Their parts are running on different threads (or even machines). And communication between threads is asynchronous. Always. And if it appears otherwise then there is some machinery behind the scenes working hard on that.
Now, if distributed functional units are running on different threads, why would anyone even try to hide that fact? Why do Web services, Enterprise Services, .NET Remoting, and also WCF abstract away this inherent parallelism? Well, it´s because asynchronous communication and asynchronous systems are harder to reason about.
Great, you could say. That´s what I want: easy to reason about systems. Isn´t that purpose of abstraction? Making complex things easy?
Well, sure. We love C# as an abstraction for how to command a processor. Nobody would want to go back to low level assembler coding.
But not just any abstraction is a good abstraction. Some abstractions are, well, “unhealthy” or plain wrong. Think of the abstraction “man is like a machine” and all its implications for the health industry.
Abstractions usually have a limited scope. Treating humans like a machine when “repairing” a wounded finger might be ok. But when treating cancer, physicians surely should view their patients differently.
So the question is: When to stop using an abstraction? Or whether to use it at all.
That´s what I´m asking myself regarding WCF. When should I stop using it and start coping with the asynchronous nature of distributed systems myself?
My current answer is: I should not even start to do remote communication with any synchronous API. And my reason is simple:
Taken together this means: Systems start small, even when distributed. So they use a sync communication API. Then they grow, need to scale, and thus need to switch to async communication – which then is probably infeasible or very hard to accomplish.
So when I´m saying, “You should not even start to do remote communication with any synchronous API.”, I´m just trying to apply lessons learned. If we know switching from sync to async is hard, and if it´s very likely this switch will be necessary some time in the future (because, well, “you never know” ;-)… then, I´d say, it´s just prudent to not go down the path, that might be easier at the moment, but will be much harder in the future.
There´s a time and place to do synchronous communication between functional units. And there´s a time and place to do asynchronous communication. The latter being always the case when developing multi-processor code or distributed code.
The earlier a developer learns to see this boundary the better.
Enough with theory for now. Let´s get our hands dirty with some code.
How can you start with asynchronous programming? Let me say that much: Don´t even try to manage threads yourself! Instead leave that to some framework or runtime. Concentrate on an easy programming model.
The programming model I´m going to use is purely message oriented and based on “pipes” through which those messages flow. A client “poures” message into such a “pipe”, and a service at the other end “ladles” them out in order to work on them.
You can implement such a programming model using queues. But I recommend you don´t think about doing it yourself ;-) Instead use an existing component like Microsoft´s Concurrency Coordination Runtime (CCR). It´s part of Robotics Studio, but can also be use standalone. All I´m gonna show you in terms of async and distributed communication will be based on the CCR. I just love it :-)
Here´s the notorious Hello World code snippet to show you, how messages are sent using the CCR:
using Microsoft.Ccr.Core;
var p = new Port<string>();
p.Post("hello, world!");
The “pipes” I was referring to are called Ports in CCR lingo. They are queues – but thread-safe ones.
So this is how a client sends a message off to some service. It just stuffs it into a Port and forgets about it. The message is processed asynchronously by the service.
To let a service listen for messages on a Port is somewhat cumbersome using just the CCR. But I´ll show you anyway, even though later on abstraction layers on top of the CCR will shield you from such details (most of the time).
Assume there is a service method like this:
void ProcessText(string text)
{
Console.WriteLine("processing: {0}", text);
}
Never mind its simplicity. For the purpose of explaining the communication basics it´s sufficient. What you want to learn is how to make it listen on a Port. Here´s how:
Arbiter.Activate(
new DispatcherQueue(),
Arbiter.Receive(
true,
p,
ProcessText
)
);
You need to set up a so called Receiver on the Port. The Receiver is associating the service method (ProcessText()) with the Port (p) as an event handler. The DispatcherQueue then is used to schedule event handling by the service method on some thread from a pool. Messages posted to the Port are automatically processed asynchronously and in parallel on background threads.
But as I said: Don´t feel frustrated right now. You won´t need to wire-up event handlers like that very often just to do distributed communication the async way. If you nevertheless want to learn more about the CCR, feel free to browse its documentation.
The only thing you have to remember is: Async communication flows through CCR Ports from clients to services as messages. You can even call those Ports channels, if you like – and feel at home since WCF has channels, too ;-) for the same reason as the CCR. Because WCF is message oriented and deep down internally even works asynchronously. It just does not show that to you very often or in an elegant way.
Before I move on to API details a quick word about why I think asynchronicity is fundamental for communication in distributed applications. Look at this code:
IService s = …
s.ProcessText("hello, world!");
What expectations to do have about how the service is delivered? Think about it for a moment…
If you´re like me, you probably thought at least…
Or maybe only the first thought occurred to you? That would be quite naturally, I guess. Because why should you even think about the other points at all? They are non-issues for method calls.
And that´s what I find important: Since the usual Web service-, .NET Remoting-, Enterprise Services-, WCF-call is a method call like any other local method call, developers tend to don´t even think about all that what is so different in remote communication.
As long as you cannot decide by looking at a service usage whether it´s a local service or a remote service you can´t be sure if all this has been taken into account. That I find very disturbing. To me a remote service usage through a regular method call like above thus is kind of misleading, fostering falling prey to fallacies, or simply lying. It´s lying about very basic properties, because 99% of the time we´re looking at method calls and don´t have to bother about all this.
Now, that´s all different when you see this in a code base:
Port<ProcessTextMsg> service = …
service.Post(new ProcessTextMsg{Text = "hello, world!"});
You immediately realize: This is not just any ordinary request for a service. This is special, because it´s asynchronous and clearly message oriented. And then you start thinking about whether the properties of this service usage are really like what you expect.
You see, I don´t want to take away the usual sync method call for remote communication from you lightly. I think there are at least 5 good reasons why distributed communication should always be async. Always.
But of course, you don´t want to pay too high a price for such “honest communication”. So let´s see what we can do about usability when going asynchronous. Stay tuned for my next blog.
Since I´m mostly concerned with software architecture and my clients are asking again and again when I´m going to write a book about the topic, I finally decided to set out and compile the material to go into the book. And I decided to do it publicly, in a new blog....
Please find the sample code for my presentations at Software Architect 2008 on Aspect Oriented Programming with PostSharp and Software Transactional Memory with NSTM here for download:
If you´ve any questions, feel free to contact me by email.
Enjoy!
|
http://weblogs.asp.net/ralfw/
|
CC-MAIN-2014-10
|
refinedweb
| 2,736
| 66.33
|
Your Account
Each algorithm in this book is presented in its own section where you will find individual performance data on the behavior of the algorithm. In this bench
marking chapter, we present our infrastructure to evaluate algorithm performance. It is important to explain the precise means by which empirical data is
computed, to enable the reader to both verify that the results are accurate and understand where the assumptions are appropriate or inappropriate given the context in which the algorithm is intended to be used..
There are numerous ways by which algorithms can be analyzed. Chapter 2 presented the theoretic formal treatment, introducing the concepts of worst-case and average-case analysis. These theoretic results can be empirically evaluated in some cases, though not all. For example, consider evaluating the performance of an algorithm to sort 20 numbers. There are 2.43*1018 permutations of these 20 numbers, and one cannot simply exhaustively evaluate each of these permutations to compute the average case. Additionally, one cannot compute the average by measuring the time to sort all of these permutations. We find that we must rely on statistical measures to assure ourselves that we have properly computed the expected performance time of the algorithm.
In this chapter we briefly present the essential points to evaluate the performance of the algorithms. Interested readers should consult any of the large number of available textbooks on statistics for more information on the relevant statistical information used to produce the empirical measurements in this book.
To compute the performance of an algorithm, we construct a suite of T independent trials for which the algorithm is executed. Each trial is intended to execute an algorithm on an input problem of size n. Some effort is made to ensure that these trials are all reasonably equivalent for the algorithm. When the trials are actually identical, then the intent of the trial is to quantify the variance of the underlying implementation of the algorithm. This may be suitable, for example, if it is too costly to compute a large number of independent equivalent trials. The suite is executed and millisecond-level timings are taken before and after the observable behavior. When the code is written in Java, the system garbage collector is invoked immediately prior to launching the trial; although this effort can’t guarantee that the garbage collector does not execute during the trial, it is hoped to reduce the chance that extra time (unrelated to the algorithm) is spent. From the full set of T recorded times, the best and worst performing times are discarded as being “outliers.” The remaining T–2 time records are averaged, and a standard deviation is computed using the following formula:
∑(xi – x)2
i
σ = -----------------------------
n – 1
where xi is the time for an individual trial and x is the average of the T–2 trials. Note here that n is equal to T–2, so the denominator within the square root is T–3. Calculating averages and standard deviations will help predict future performance, based on Table A-1, which shows the probability (between 0 and 1) that the actual value will be within the range [x–k*σ,x+k*σ], where σ represents the standard deviation value computed in the equation just shown. The probability values become confidence intervals that declare the confidence we have in a prediction.
Table A-1. Standard deviation table
For example, in a randomized trial, it is expected that 68.27% of the time the result will fall within the range [x–σ, x+σ].
When reporting results, we never present numbers with greater than four decimal digits of accuracy, so we don’t give the mistaken impression that we believe the accuracy of our numbers extends that far. When the computed fifth and greater digits falls in the range [0, 49,999], then these digits are simply truncated; otherwise, the fourth digit is incremented to reflect the proper rounding. This process will convert a computation such as 16.897986 into the reported number 16.8980.
In this book we include numerous tables showing the performance of individual algorithms on sample data sets. We used two different machines in this process:
Desktop PC
We used a reasonable “home office” personal computer. This computer had a Pentium(R) 4 CPU 2.8Ghz with 512 MB of RAM.
High-end computer.
On Java test cases, the current system time (in milliseconds) is determined immediately prior to, and after, the execution of interest. The code in Example A-1 measures the time it takes to complete the task. In a perfect computer, the 30 trials should all require exactly the same amount of time. Of course this is unlikely to happen, since modern operating systems have numerous background processing tasks that share the same CPU on which the performance code executes.
Example A-1. Java example to time execution of task
public class Main {
public static void main (String[]args) {
TrialSuite ts = new TrialSuite();
for (long len = 1000000; len <= 5000000; len += 1000000) {
for (int i = 0; i < 30; i++) {
System.gc();
long now = System.currentTimeMillis();
/** Task to be timed. */
long sum = 0;
for (int x = 1; x <= len; x++) { sum += x; }
Example A-1. Java example to time execution of task (continued)
long end = System.currentTimeMillis();
ts.addTrial(len, now, end);
}
}
System.out.println (ts.computeTable());
} }
The TrialSuite class stores trials by their size. Once all trials have been added to the suite, the resulting table is computed. To do this, the running times are added together to find the total sum, the minimum value, and the maximum value. As described earlier, the minimum and maximum values are removed from the set when computing the average and standard deviation.
For C test cases, we developed a benchmarking library to be linked with the code to test. In this section we briefly describe the essential aspects of the timing code and refer the interested reader to the code repository for the full source.
Primarily created for testing sort routines, the C-based infrastructure can be linked against existing source code. The timing API takes over responsibility for parsing the command-line arguments:
usage: timing [-n NumElements] [-s seed] [-v] [OriginalArguments]
-n declares the problem size [default: 100,000]
-v verbose output [default: false]
-s # set the seed for random values [default: no seed]
-h print usage information
The timing library assumes a problem will be attempted whose input size is defined by the [-n] flag. To produce repeatable trials, the random seed can be set with [-s seed]. To link with the timing library, a test case provides the following functions:
void problemUsage() Report to the console the set of [OriginalArguments] supported by the specific code. Note that the timing library parses the declared timing parameters, and remaining arguments are passed along to the prepareInput function.
void prepareInput (int size, int argc, char **argv)
Depending upon the problem to be solved, this function is responsible for building up the input set to be processed within the execute method. Note that this information is not passed directly to execute via a formal argument, but instead should be stored as a static variable within the test case.
void postInputProcessing()
If any validation is needed after the input problem is solved, that code can execute here.
void execute()
This method will contain the body of code to be timed. Thus there will always be a single method invocation that will be part of the evaluation time. When the execute method is empty, the overhead (on the high-end computer) is, on average, .002 milliseconds and is considered to have no impact on the overall reporting.
The test case in Example A-2 shows the code task for the addition example.
Example A-2. Task describing addition of n numbers
extern int numElements; /* size of n */ void problemUsage() { /* none */ } void prepareInput() { /* none */ } void postInputProcessing() { /* None */ }
void execute() { int x; long sum = 0; for (x = 1; x <= numElements; x++) { sum += x; }
}
Each execution of the C function corresponds to a single trial, and so we have a set of shell scripts whose purpose is to execute the code under test repeatedly in order to generate statistics. For each suite, a configuration file is constructed to represent the trial suite run. Example A-3 shows the config.rc for the value-based sorting used in Chapter 4.
Example A-3. Sample configuration file to compare sort executions
# configure to use these BINS BINS=./Insertion ./Qsort_2_6_11 ./Qsort_2_6_6 ./Qsort_straight
# configure suite TRIALS=10 LOW=1 HIGH=16384 INCREMENT=*2
This specification file declares that the set of executables will be three variations of QUICKSORT with one INSERTION SORT. The suite consists of problem sizes ranging from n=1 to n=16,384, where n doubles after each run. For each problem size, 10 trials are executed. The best and worst performers are discarded, and the resulting generated table will have the averages (and standard deviations) of the remaining eight trials.
Example A-4 contains the compare.sh script that generates an aggregate set of information for a particular problem size n.
#!/bin/bash
Example A-4. compare.sh benchmarking script
CODE=`dirname $0`
SIZE=20 NUM_TRIALS=10 if [ $# -ge 1 ] then
SIZE=$1 NUM_TRIALS=$2 fi
if [ "x$CONFIG" = "x" ]
then echo "No Configuration file (\$CONFIG) defined" exit 1
fi
if [ "x$BINS" = "x" ]
then if [ -f $CONFIG ] then
BINS=`grep "BINS=" $CONFIG | cut -f2- -d'='` EXTRAS=`grep "EXTRAS=" $CONFIG | cut -f2- -d'='` fi
if [ "x$BINS" = "x" ]
then echo "no \$BINS variable and no $CONFIG configuration " echo "Set \$BINS to a space-separated set of executables"
fi fi
echo "Report: $BINS on size $SIZE"
echo "Date: `date`"
echo "Host: `hostname`"
RESULTS=/tmp/compare.$$
for b in $BINS
do
TRIALS=$NUM_TRIALS
# start with number of trials followed by totals (one per line)
echo $NUM_TRIALS > $RESULTS
while [ $TRIALS -ge 1 ]
Example A-4. compare.sh benchmarking script (continued)
do $b -n $SIZE -s $TRIALS $EXTRAS | grep secs | sed 's/secs//' >> $RESULTS TRIALS=$((TRIALS-1))
done
# compute average/stdev
RES=`cat $RESULTS | $CODE/eval`
echo "$b $RES"
rm -f $RESULTS done
compare.sh makes use of a small C program, eval, which computes the average and standard deviation using the method described at the start of this chapter. This compare.sh script is repeatedly executed by a manager script, suiteRun.sh, that iterates over the desired input problem sizes specified within the config.rc file, as shown in Example A-5.
Example A-5. suiteRun.sh benchmarking script
#!/bin/bash CODE=`dirname $0`
# if no args then use default config file, otherwise expect it if [ $# -eq 0 ] then
CONFIG="config.rc"
else CONFIG=$1 echo "Using configuration file $CONFIG..."
# export so it will be picked up by compare.sh export CONFIG
# pull out information if [ -f $CONFIG ] then
BINS=`grep "BINS=" $CONFIG | cut -f2- -d'='`
TRIALS=`grep "TRIALS=" $CONFIG | cut -f2- -d'='`
LOW=`grep "LOW=" $CONFIG | cut -f2- -d'='`
HIGH=`grep "HIGH=" $CONFIG | cut -f2- -d'='`
INCREMENT=`grep "INCREMENT=" $CONFIG | cut -f2- -d'='`
else echo "Configuration file ($CONFIG) unable to be found." exit -1
fi
# headers
HB=`echo $BINS | tr ' ' ','`
echo "n,$HB"
Example A-5. suiteRun.sh benchmarking script (continued)
# compare trials on sizes from LOW through HIGH SIZE=$LOW REPORT=/tmp/Report.$$ while [ $SIZE -le $HIGH ] do
# one per $BINS entry
$CODE/compare.sh $SIZE $TRIALS | awk 'BEGIN{p=0} \
{if(p) { print $0; }} \
/Host:/{p=1}' | cut -d' ' -f2 > $REPORT
# concatenate with , all entries ONLY the average. The stdev is
# going to be ignored
# -----------------------------------------------------------VALS=`awk 'BEGIN{s=""}\
{s = s "," $0 }\
END{print s;}' $REPORT`
rm -f $REPORT
echo $SIZE $VALS
# $INCREMENT can be "+ NUM" or "* NUM", it works in both cases. SIZE=$(($SIZE$INCREMENT)) done
The Scheme code in this section measures the performance of a series of code executions for a given problem size. In this example (used in Chapter 1) there are no arguments to the function under test other than the size of the problem to compute. First we list some helper functions used to compute the average and standard deviation for a list containing execution times, shown in Example A-6.
Example A-6. Helper functions for Scheme timing
;; foldl: (X Y -> Y) Y (listof X) -> Y
;; Folds an accumulating function f across the elements of lst.
(define (foldl f acc lst)
(if (null? lst)
acc
(foldl f (f (car lst) acc) (cdr lst))))
;; remove-number: (listof number) number -> (listof number) ;; remove element from list, if it exists (define (remove-number nums x)
(if (null? nums) '()
(if (= (car nums) x) (cdr nums)
(cons (car nums) (remove-number (cdr nums) x)))))
;; find-max: (nonempty-listof number) -> number ;; Finds max of the nonempty list of numbers. (define (find-max nums)
(foldl max (car nums) (cdr nums)))
Example A-6. Helper functions for Scheme timing (continued)
;; find-min: (nonempty-listof number) -> number ;; Finds min of the nonempty list of numbers. (define (find-min nums)
(foldl min (car nums) (cdr nums)))
;; sum: (listof number) -> number ;; Sums elements in nums. (define (sum nums)
(foldl + 0 nums))
;; average: (listof number) -> number
;; Finds average of the nonempty list of numbers.
(define (average nums)
(exact->inexact (/ (sum nums) (length nums))))
;; square: number -> number ;; Computes the square of x. (define (square x) (* x x))
;; sum-square-diff: number (listof number) -> number ;; helper method for standard-deviation (define (sum-square-diff avg nums)
(foldl (lambda (a-number total)
(+ total (square (- a-number avg))))
0
nums))
;; standard-deviation: (nonempty-listof number) -> number ;; Calculates standard deviation. (define (standard-deviation nums)
(exact->inexact
(sqrt (/ (sum-square-diff (average nums) nums)
(length nums)))))
The helper functions in Example A-6 are used by the timing code in Example A-7, which runs a series of test cases for a desired function.
Example A-7. Timing Scheme code
;; Finally execute the function under test on a problem size
;; result: (number -> any) -> number
;; Computes how long it takes to evaluate f on the given probSize.
(define (result f probSize)
(let* ((start-time (current-inexact-milliseconds))
(result (f probSize))
(end-time (current-inexact-milliseconds)))
(- end-time start-time)))
;; trials: (number -> any) number number -> (listof number)
;; Construct a list of trial results
(define (trials f numTrials probSize)
(if (= numTrials 1)
(list (result f probSize))
Example A-7. Timing Scheme code (continued)
(cons (result f probSize) (trials f (- numTrials 1) probSize))))
;; Generate an individual line of the report table for problem size (define (smallReport f numTrials probSize) (let* ((results (trials f numTrials probSize))
(reduced (remove-number (remove-number results (find-min results)) (find-max results))))
(display (list 'probSize: probSize 'numTrials: numTrials (average reduced)))
(newline)))
;; Generate a full report for specific function f by incrementing ;; one to the problem size (define (briefReport f inc numTrials minProbSize maxProbSize)
(if (>= minProbSize maxProbSize) (smallReport f numTrials minProbSize) (begin
(smallReport f numTrials minProbSize)
(briefReport f inc numTrials (inc minProbSize) maxProbSize))))
;; standard doubler and plus1 functions for advancing through report
(define (double n) (* 2 n))
(define (plus1 n) (+ 1 n))
The largeAdd function from Example A-8 adds together a set of n numbers. The output generated by (briefReport largeAdd millionplus 30 1000000 5000000) is shown in Table A-2.
Example A-8. largeAdd Scheme function
;; helper method
(define (millionplus n) ( + 1000000 n))
;; Sum numbers from 1..probSize (define (largeAdd probSize) (let loop ([i probSize] [total 0])
(if (= i 0) total (loop (sub1 i) (+ i total)))))
Table A-2. Execution time for 30 trials of largeAdd
It is instructive to review the actual results when computed on the same platform, in this case a Linux 2.6.9-67.0.1.ELsmp i686 (this machine is different from the desktop PC and high-end computer mentioned earlier in this chapter). We present three tables (Tables A-3, A-5, and A-6), one each for Java, C, and Scheme. In each table, we present the millisecond results and a brief histogram table for the Java results.
Table A-3. Timing results of 30 computations in Java
The aggregate behavior of Table A-3 is detailed in histogram form in Table A-4. We omit from the table rows that have only zero values; all nonzero values are shaded in the table.
To interpret these results for Java, we turn to statistics. If we assume that the timing of each trial is independent, then we refer to the confidence intervals described earlier. If we are asked to predict the performance of a proposed run for n=4,000,000, then we can say that with 95.45% probability the expected timing result will be in the range [32.9499, 34.6215].
In raw numbers, the C implementation appears to be about three times faster. The histogram results are not as informative, because the timing results include fractional milliseconds, whereas the Java timing strategy reports only integer values.
The final table contains the results for Scheme. The variability of the execution runs in the Scheme implementation is much higher than Java and C. One reason may be that the recursive solution requires more internal bookkeeping of the computation.
Table A-6. Timing results of 30 computations in Scheme
Instead of using millisecond-level timers, nanosecond timers could be used. On the Java platform, the only change in the earlier timing code would be to invoke System.nanoTime() instead of accessing the milliseconds. To understand whether there is any correlation between the millisecond and nanosecond timers, the code was changed as shown in Example A-9.
Example A-9. Using nanosecond timers in Java
TrialSuite tsM = new TrialSuite();
TrialSuite tsN = new TrialSuite();
for (long len = 1000000; len <= 5000000; len += 1000000) {
for (int i = 0; i < 30; i++) {
long nowM = System.currentTimeMillis();
long nowN = System.nanoTime();
long sum = 0;
for (int x = 0; x < len; x++) { sum += x; }
long endM = System.currentTimeMillis();
long endN = System.nanoTime();
Example A-9. Using nanosecond timers in Java (continued)
tsM.addTrial(len, nowM, endM); tsN.addTrial(len, nowN, endN); } }
Table A-3, shown earlier, contains the millisecond results of the timings, and Table A-7 contains the results when using the nanosecond timer. The clearest difference is that the standard deviation has shrunk by an order of magnitude, thus giving us much tighter bounds on the expected execution time of the underlying code. One can also observe, however, that the resulting timings still have issues with precision—note the large standard deviation for the n=5,000,000 trial. This large deviation corresponds with the “spike” seen in this case in Table A-3.
Table A-7. Results using nanosecond timers
Because we believe using nanosecond-level timers does not add sufficient precision or accuracy, we continue to use millisecond-level timing results within the benchmark results reported in the algorithm chapters. We also continue to use milliseconds to avoid giving the impression that our timers are more accurate than they really are. Finally, nanosecond timers on Unix systems are not yet standardized, and there are times when we wished to compare execution times across platforms, which is another reason why we chose to use millisecond-level timers throughout this book.
Why such variation among what should otherwise be a rather consistent behavior? Reviewing the data from Table A-3, there appear to be “gaps” of 15 or 16 milliseconds in the recorded trial executions. These gaps reflect the accuracy of the Java timer on the Windows platform, rather than the behavior of the code. These variations will appear whenever System.currentTimeMillis() is executed, yet the values are significant only when the base execution times are very small (i.e., near 16 milliseconds).
The Sun engineers who developed Java are aware of the problem of timers for the Windows platform, and have no immediate plans to resolve the issue (and this has been the situation for nearly six years now). See for clarification.
If you enjoyed this excerpt, buy a copy of Algorithms in a Nutshell .
© 2014, O’Reilly Media, Inc.
(707) 827-7019
(800) 889-8969
All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
|
http://archive.oreilly.com/pub/a/software-engineering/excerpts/algorithms-in-nutshell/benchmarking.html
|
CC-MAIN-2014-52
|
refinedweb
| 3,333
| 51.48
|
PiecewiseDecay¶
- class
paddle.fluid.dygraph.
PiecewiseDecay(boundaries, values, begin, step=1, dtype='float32')[source]
Piecewise decay scheduler.
The algorithm can be described as the code below.
boundaries = [10000, 20000] values = [1.0, 0.5, 0.1] if global_step < 10000: learning_rate = 1.0 elif 10000 <= global_step < 20000: learning_rate = 0.5 else: learning_rate = 0.1
- Parameters
boundaries (list) – A list of steps numbers. The type of element in the list is python int.
values (list) – A list of learning rate values that will be picked during different step boundaries. The type of element in the list is python float.
begin (int) – The begin step to initilize the global_step in the description above.
step (int, optional) – The step size used to calculate the new global_step in the description above. The defalult value is 1.
dtype (str, optional) – The data type used to create the learning rate variable. The data type can be set as ‘float32’, ‘float64’. The default value is ‘float32’.
- Returns
None.
Examples
import paddle.fluid as fluid boundaries = [10000, 20000] values = [1.0, 0.5, 0.1] with fluid.dygraph.guard(): optimizer = fluid.optimizer.SGD( learning_rate=fluid.dygraph.PiecewiseDecay(boundaries, values, 0) )
|
https://www.paddlepaddle.org.cn/documentation/docs/en/api/dygraph/PiecewiseDecay.html
|
CC-MAIN-2019-47
|
refinedweb
| 191
| 63.76
|
Chapter 13:In this chapter:
Two-Way Syncing
The Logic of Syncing
The Conduit Classes
Sales Conduit Sample Based on the Classes
Generic Conduit
You can implement two-way syncing using two different methods. While both methods rely on Palm sample code, that is where the similarity ends. The first is based on the conduit classes (commonly referred to as basemon and basetabl ) and the second on new code called Generic Conduit. Before delving into either approach, however, we need to discuss the logic involved in two-way, mirror image syncing.
The Logic of Syncing
There are two forms of syncing that occur between the desktop and the handheld. The quicker method is appropriately named "fast sync" and the other is likewise aptly named "slow sync." A fast sync occurs when the handheld is being synced to the same desktop machine that it was synced to the previous time. Because handhelds can be synced to multiple desktops, this is not the only possibility. As a result, there are quite a few logic puzzles that need sorting out when records don't match. Let's start with the easier, fast scenario.
Fast Sync
A fast sync occurs when a handheld syncs to the same desktop as it last did, so you can be assured that the delete/archive/modify bits from the handheld are accurate. In such cases, the conduit needs to do the following:
Examine the desktop data
The conduit reads the current desktop data into a local database.
Examine the handheld data
For each changed record on the handheld, the conduit does the following:
- If the record is archived, it adds the record to an archived database on the desktop and marks it in the local database as a pending delete. It deletes the archived record from the handheld.
- If deleted, it marks it in the local database as a pending delete and removes it from the handheld. (Remember, user-deleted records aren't actually deleted until a sync occurs; the user may not see them, but your application keeps them around for this very occasion.)
- If modified, it modifies it in the local database.
- If new--if the record doesn't exist in the local database--the conduit adds it.
Examine the local data
It is necessary to handle modified records in the local database by comparing them to the handheld records:
- If archived, it removes the record from the handheld, puts it in the archived database, and marks it as a pending delete in the local database.
- If a record is deleted, the conduit removes it from the handheld and marks it as a pending delete in the local database.
- If modified, it copies the modifications to the handheld and clears the modification flag from the record in the local database.
- If new, it copies the record to the handheld and clears the added flag from the record in the local database.
Dispose of the old data
Now the conduit deletes all records in the local database that are marked for deletion. At this point, all the records in the local database should match those on the handheld.
Write local database to desktop database
Finally, all the data is moved from the temporary local database back to permanent storage; the archive database is written out first and then the local database. A copy of the local database is also saved as a backup database--you will use this for slow sync.
Thorny Comparisons--Changes to the Same Records on Both Platforms
There are some very thorny cases of record conflicts we have to consider. When you give users the capability of modifying a record in more than one location, some twists result. The problem occurs when you have a record that can be changed simultaneously on the handheld and on the local database, but in different ways. For example, a customer record in our Sales application has its address changed on the handheld database and its name changed on the local database. Or a record was deleted on one platform and changed on another. The number of scenarios is so great that we require some formal rules to govern cases of conflict.
The Palm philosophy concerning such problems is that no data loss should occur, even at the price of a possible proliferation of records with only minor differences. Thus, in the case of a record with a name field of "Smith" on the handheld and "Smithy" on the local database, the end result is two records, each present in both databases. Here are the various possibilities and how this philosophy plays out into rules for actual conflicts.
A record is deleted on one database and modified on the other
The deleted version of the record is done away with, and the changed version is copied to the other platform.
A record is archived on one database and changed on the other
The archived version of the record is put in the archive database, and the changed version is copied to the other platform. Exception: if the archived version has been changed in exactly the same way, we do the right thing and end up with only one archived record.
A record is archived on one database and deleted on the other
The record is put in the archive database.
A record is changed on one database and changed differently on the other
The result is two records. This is true for records with the same field change, such as our case of "Smith" and "Smithy." It is also true for a record where the name field is changed on one record and the address field on the other. In this case, you also end up with two records. Thus these initial records:
yield the following records in fully synced mirror image databases:
A record is changed on one database and changed identically on the other
If a record is changed in both places in the same way (the same field contains the same new value in both places), the result is one record in both places.
This can get tricky, however. While it may be clear that "Smith" is not "Smithy", it is not so obvious that "Smith" is not "smith". Depending on the nature of your record fields, you may need to make case-by-case decisions about the meaning of identical.
Slow Sync
A slow sync takes place when the last sync of the handheld was not with this desktop. Commonly, this occurs when the user has more recently synced to another desktop machine. Less frequently, this happens because this is the first sync of a handheld. If the last sync of the handheld was not with this desktop, the modify/archive/delete bits are not accurate with respect to this desktop. They may be accurate with the desktop last synced to, but this doesn't help with the current sync scenario.
Since the modify/archive/delete bits aren't accurate, we've got to figure out how the handheld database has changed from the desktop database since the last sync. In order to do this, we need an accurate copy of the local database at the time of the last sync. This is complicated by the possibility that the local database may have changed since the last sync. The solution to this problem is to use the backup copy that we made after the last sync between these two machines--the last point at which these two matched.
Since this backup, both the handheld and the desktop database may have diverged. While it is true that all the changes to the desktop have been marked (changes, new, deleted, archived), it is not true for the handheld. Some or all of the changes to the handheld data were lost when the intervening sync took place; the deleted/archived records were removed, and the modified records were unmarked.
To deal with this problem, we need to use a slow sync. As the name implies, a slow sync looks at every record from the handheld. It copies them to an in-memory database on the desktop called the remote database and compares them to the backup database records. Here are the possibilities that need to be taken into account:
- The remote record matches a record in the backup database--nothing has changed.
- The remote record isn't present in the backup database--the record is new and is marked as modified.
- The backup record isn't present in the remote database--the remote record has been deleted (it could have been archived, in which case it has been archived on a different desktop). The record is marked as deleted.
- The backup record and the remote record are different--the remote record has been modified. The record is marked as changed.
At this point, we've got a copy of the remote database where each record has been left alone, marked as modified (due to being new or changed), or marked as deleted. Now the conduit can carry out the rest of the sync. Thus, the difference between the two syncs is the initial time required to mark records so that the two databases agree. It is a slow sync because every record from the handheld had to be copied to the desktop.
Now that you know what to do with records during a sync, let's discuss how to do it.
The Conduit Classes
You may be apprehensive about tackling two-way syncing using the conduit classes provided by Palm (sometimes called basemon and basetabl because of the filenames in which they are located). The implementation may seem murky and the examples quite complicated. If you looked over the samples, you certainly noted that there is no simple example showing how to do what you want to do. Things get even more formidable if you don't want to save your data in the format the base classes provide.
We had all these same apprehensions--many of them were well deserved. At the time this book was written, the documentation wasn't clear concerning the definitions and tasks of each class, nor was it clear what you specifically needed to do to write your own conduit (what methods you are required to override, for instance). The good news is that a detailed examination shows that the architecture of the conduit classes is sound; they do quite a lot for you, and it's not hard to support other file formats once you know what you have to change.
After diving in and working with the conduit classes, we figured out how to use them effectively. In a moment, we will show you their architecture and describe each class and its responsibilities. After that, we show you what is required on your part--what methods you must override and what data members you must set. Next, we show you the code for a complete syncing conduit that supports its own file format.
The Classes and Their Responsibilities
The classes you use to create the conduit are:
- CBaseConduit Monitor
- Runs the entire sync
- CBaseTable
- Creates a table that holds all the records
- CBaseSchema
- Defines the structure of a record in the table
- CBaseRecord
- Creates an object used to read and write each record to the table
- CDTLinkConverter
- Converts the Palm OS record format into a CBaseRecord and vice versa
CBaseConduitMonitor
The CBaseConduitMonitor class is responsible for directing syncing from the start to the end. It does the logging, initializes and deinitializes the sync manager, creates tables, populates them, and decides what records should go where. It is the administrator of the sync. It is also within CBaseConduitMonitor that we add all of the code from the previous chapters that handle the uploading and downloading of data.
CBaseConduitMonitor contains five functions that you need to override:
- CreateTable
- ConstructRecord
- SetArchiveFileExt
- LogRecordData
- LogApplicationName
At this point, we give you a brief description of each function. Later, we'll look at actual code when examining our conduit sample.
CreateTable
You override this to create your class derived from CBaseTable. Here's the function declaration:
long CreateTable(CBaseTable*& pBase);
ConstructRecord
This routine creates your own class derived from CBaseRecord. Here is the function:
long ConstructRecord(CBaseRecord*& pBase,
CBaseTable& rtable, WORD wModAction);
If the incoming
wModActionparameter is equal to
MODFILTER_STUPID, the newly created CBaseRecord object should check any attempted changes to its fields. If the change attempts to set a new value equal to the old value, the CBaseRecord object should just ignore the change, not marking the record as having changed.
SetArchiveFileExt
This function simply sets the filename extension used for archive files. Here is the override call:
void SetArchiveFileExt();
Your override should set the
m_ArchFileExtdata member of CBaseConduitMonitor to a string that will be appended to the category name and used as the filename of the archive.
LogRecordData
This function writes a summary of a record to the log. Here is the function you override:
void LogRecordData(CBaseRecord&rRec, char *errBuf);
Here are the values of the parameters:
-
rRec
- The record to summarize
-
errBuff
- The buffer to write the summary to
This routine is called when the monitor is going to add a log entry for a specific record (for example, when a record has been changed on both the desktop and the handheld). It writes a succinct description of the record, one that enables the user to identify the record.
LogApplicationName
This is the function that returns the name of the conduit:
void LogApplicationName(char* appName, WORD len);
The conduit name is returned into
appName(the
appNamebuffer is
lenbytes long).
CBaseConduitMonitor data member
This class contains one data member that you must initialize:
-
m_wRawRecSize
- Initialize this to the maximum size of a record in your handheld database. It is used as the size of the
m_pBytesfield of the CRawRecordInfo (see ). It is used to read and write handheld database records.
CBaseTable
This class is used to create a table that contains field objects for each record. The whole thing is stored in a large array. Every record contains the same number of fields in the same order. The number of rows in the array is the number of records in the table. The number of columns is the number of fields per record. You should imagine a structure similar to that shown in Figure 13-1.
This is not an array of arrays, but a single large one. The fields are stored in a single-dimensional array, where the fields for the first record are followed by those of the second record and so on. When it's necessary to retrieve the values in a row, a CBaseRecord is positioned at the appropriate fields in the array. It can then read from and write to the fields to effect a change to the row. The table is responsible for reading itself from a file and writing itself out. The default format is an MFC archive format.
TIP:
This type of programming is a bit startling after months of handheld programming, where every byte of memory is precious. It's refreshing to be in an environment where memory is less limited. How profligate to just allocate a field object for every field in every row--all we can say is that its a good thing conduits aren't required to run in 15K of dynamic memory.
A conduit actually has several CBaseTables: one for the records on the handheld, one for the records on the desktop, one for the backup database during a slow sync, and one containing archived records.
Within a table, the data is handled in a straightforward manner and records are frequently copied from one table to another during a sync. While records can be individually deleted, they are normally marked as deleted and then all purged at once.
Functions you must override
This class has only one function to override:
virtual long AppendDuplicateRecord (
CBaseRecord&rFrom, CBaseRecord&rTo, BOOL bAllFlds = TRUE);
Here are the parameters and their values:
-
rFrom
- The record that contains the fields that are copied from.
-
rTo
- A record that contains the fields that get copied to.
-
bAllFlds
- If this is true, the record ID and status should be copied with all the other fields.
This adds a new row of fields.
CBaseTable functions you can't override, but wish you could
There are two other functions that you will often wish to override. The problem is that you can't, given the flawed design of the class. These functions are:
long OpenFrom (CString& rTableName, long openFlag);
long ExportTo (CString& rTableName, CString & csError);
These are the routines responsible for reading a table from and writing a table to disk. Thus, any time you want to use a different file format, you should override these.
Unfortunately, these routines aren't declared virtual in the original Palm class and can't easily be overridden. Since you can't accomplish what you need to in a standard way, you have to use a far less appealing method. See "The problem--virtual reality beats nonvirtual reality" later in this chapter for a description of the unpalatable measures we suggest.
CBaseSchema
This class is responsible for defining the number, the order, and the type of the fields of each record.
Functions you must override
This class contains only one function to override:
virtual long DiscoverSchema(void);
This routine specifies each of the fields and marks which ones store the record ID, attributes, category ID, etc.
CBaseSchema data members
There are several data members that you need to initialize in DiscoverSchema:
-
m_FieldsPerRow
- Initialize to the number of fields in each record.
-
m_Fields
- Call this object's SetSize member function to specify the number of fields in each record. Call the object's SetAt member function for every field from
0to
m_FieldsPerRow-1to specify the type of the field.
-
m_RecordIdPos
- Initialize to the field number containing the record ID.
-
m_RecordStatusPos
- Initialize to the field number containing the status byte.
-
m_CategoryIdPos
- Initialize to the field number containing the category ID.
-
m_PlacementPos
- Initialize to the field number containing the record number on the handheld. If you don't keep track of record numbers, you'll initialize to an empty field.
Most conduits do not need the record numbers from the handheld and therefore have a dummy field that the
m_PlacementPosrefers to. Occasionally, a conduit needs to know the ordering of the records on the handheld. For example, the Memo Pad conduit wants records on the desktop to be displayed in the same order as on the handheld and has no key on which to determine the order. Its solution is to use the ordering of the records in the database as the sort order (no other conduits for the built-in applications use record numbers).
A conduit that needs record numbers would do the following:
- Override ApplyRemotePositionMap (which does nothing by default) to read the record IDs in the order in which they are stored on the handheld.
- Store each record number in the field referenced by
m_PlacementPos.
CBaseRecord
A CBaseRecord is a transitory object that you use to access a record's fields. The fields are stored within the table itself; use the CBaseRecord to read and write data from a specific row within the table. Your derived class should contain utility routines to read and write the data in the record.
Functions you must override
This class contains a couple of functions that you must override:
virtual BOOL operator==(const CBaseRecord&r);
This function compares two records to determine whether they are the same. It should not just compare record IDs or attributes, but should use all the relevant fields in the records. Note that the parameter
ris actually your subclass of CBaseRecord.
Whenever this function is called, the two records are in different tables:
virtual long Assign(const CBaseRecord&r);
This routine copies the contents of
rto this record, including the record ID and attributes. Note that the parameter
ris actually a subclass of CBaseRecord. The two records are in different tables.
Useful functions
There are also several functions that you can use to set the record ID, get or set individual attributes of the record, and so on. Here they are:
long SetRecordId (int nRecId);
long GetRecordId (int& rRecId);
long SetStatus (int nStatus);
long GetStatus (int& rStatus);
long SetCategoryId (int nCatId);
long GetCategoryId (int& rCatId);
long SetArchiveBit (BOOL bOnOff);
BOOL IsDeleted (void);
BOOL IsModified (void);
BOOL IsAdded (void);
BOOL IsArchived (void);
BOOL IsNone (void);
BOOL IsPending (void);
The first set of routines returns information about the record ID, its status, the category ID, and whether the record should be archived. The second set of routines tells you the modified status of the record.
CBaseRecord data members
There are a couple of data members that are available for you to use:
-
m_fields
- This data member is an array of fields for this specific record. It is initialized by the table when the table is focused (so to speak) on the record. Only one record within a table can be focused at a time.
-
m_Positioned
- This specifies whether the table is positioned on this particular record. It starts out false, but when the table focuses on a record, it is set to true.
CDTLinkConverter
This class is responsible for converting from Palm record format to your subclass of CBaseRecord and vice versa.
Functions you must override
long ConvertToRemote(CBaseRecord &rRec, CRawRecordInfo &rInfo);
You use this function to convert from your subclass of CBaseRecord to the CRawRecordInfo. The
rInfo.m_pBytespointer has already been allocated for you. You must write into the buffer and update
rInfo.m_RecSize:
long ConvertFromRemote(CBaseRecord &rRec, CRawRecordInfo &rInfo);
Convert from the CRawRecordInfo to your subclass of CBaseRecord. The
rRecparameter is the subclass of CBaseRecord created by your CBaseTable::CreateRecord. You need to initialize
rRecbased on the values in
rInfo.
Sales Conduit Sample Based on the Classes
Now that you have an idea what each of the classes does and which functions you override, it is time to use this information to add syncing to the Sales application conduit. We use these new sync classes for syncing the customer database. We continue to use our own routines that we created in Chapter 12 to upload the orders database and download the products database.
There is also a problem in the implementation of two of the classes: CBaseConduitMonitor and CBaseTable. We use an unorthodox approach, which involves circumventing normal inheritance and copying the classes by hand. We talk about this as part of our discussion of each of these classes in the sample conduit. Other classes are used normally.
CSalesConduitMonitor--Copying the Class
This is the class that is based on CBaseConduitMonitor. Let's look at a problem we have before going any further.
A virtual conundrum
Our customer database doesn't use categories, but CBaseConduitMonitor expects them to exist. CBaseConduitMonitor's ObtainRemoteCategories function reads the app info block of the handheld database and causes an error if the AppInfo block doesn't exist. In the original class, there were two functions that expected information about categories. The first was SynchronizeCategories, which is responsible for syncing the categories. We overrode this routine to do nothing. Unfortunately, a second function dealing with categories was not declared virtual in the original class and thus could not be overridden. Here is the unseemly bit of code that caused our problem:
long ObtainRemoteCategories (void); // acquire HH Categories
Because of this code, our function ObtainRemoteCategories never gets called, and our conduit fails with an error. After a bit of nail biting, our solution was to re-sort to copy and paste--we copy the basemon.cpp and basemon.h files to our conduit source directory and change the declaration of CBaseConduitMonitor so that ObtainRemoteCategories is virtual.
WARNING:
In a perfect world, you would never have to concern yourself with the following code. It would remain invisible to you. Doing this type of code copy is an action fraught with difficulty. If Palm Computing changes this class, you'll have to reapply this change (unless one of the changes was to add the needed
virtual, in which case you could throw away your changes).
Code we wish you never had to see
Here is the class that you need to copy into your conduit source directory (note that the line of code we change is in bold):
class CBaseConduitMonitor
{
protected:
// code deleted that declares lots of data members
virtual long CreateTable (CBaseTable*& pBase);
virtual long ConstructRecord (CBaseRecord*& pBase,
CBaseTable& rtable,
WORD wModAction);
virtual void SetArchiveFileExt();
// Moved to Base class.
virtual long ObtainRemoteTables(void);// get HH real & archive tables
virtual long ObtainLocalTables (void);// get PC real & archive tables
virtual long AddRecord (CBaseRecord& rFromRec,
CBaseTable& rTable);
virtual long AddRemoteRecord (CBaseRecord& rRec);
virtual long ChangeRemoteRecord (CBaseRecord& rRec);
virtual long CopyRecordsHHtoPC (void); // copy records from HH to PC
virtual long CopyRecordsPCtoHH (void); // copy records from PC to HH
virtual long FastSyncRecords (void); // carries out 'Fast' sync
virtual long SlowSyncRecords (void); // carries out 'Slow' sync
// deleted function
// virtual long CreateLocArchTable (CBaseTable*& pBase);
virtual long SaveLocalTables (const char*);
virtual long PurgeLocalDeletedRecs (void);
virtual long GetLocalRecordCount (void);
virtual long SendRemoteChanges (CBaseRecord& rLocRecord);
virtual long ApplyRemotePositionMap(void);
virtual long SynchronizeCategories (void);
virtual long FlushPCRecordIDs (void);
virtual long ArchiveRecords (void);
// file link related functions
virtual long ProcessSubscription (void);
virtual int GetFirstRecord (CStringArray*& pSubRecord );
virtual int GetNextRecord(CStringArray*& );
virtual int DeleteSubscTableRecs(CString& csCatName,
CBaseTable* pTable, WORD wDeleteOption);
virtual int AddCategory(CString& csCatName, CBaseTable* pTable);
virtual long LogModifiedSubscRec(CBaseRecord* pRecord,
BOOL blocalRec);
virtual long SynchronizeSubscCategories(CCategoryMgr* catMgr);
virtual long CheckFileName(CString& csFileName);
virtual int GetSubData (CString& csfilename, CString csFldOrder );
virtual void AddSubDataToTable( int subCatId);
// Audit trail notifications (optional override)
virtual long AuditAddToPC (CBaseRecord& rRec, long rowOffset);
virtual long AuditUpdateToPC (CBaseRecord& rRec, long rowOffset);
virtual long AuditAddToHH (CBaseRecord& rRec, long rowOffset);
// Overload with care !!
virtual long EngageStandard (void);
virtual long EngageInstall (void);
virtual long EngageBackup (void);
virtual long EngageDoNothing (void);
Code changed here:
// ObtainRemoteCategories changed to virtual Neil Rhodes 8/6/98
virtual long ObtainRemoteCategories (void); // acquire HH Categories
virtual long SaveRemoteCategories (CCategoryMgr *catMgr);
long SaveLocalCategories (CCategoryMgr *catMgr);
long ClearStatusAddRecord (CBaseRecord& rFromRec,
CBaseTable& rTable);
long AllocateRawRecordMemory (CRawRecordInfo& rawRecord, WORD);
void SetDirtyCategoryFlags (CCategoryMgr* catMgr);
void UpdatePCCategories (CUpdateCategoryId *updCatId);
BOOL IsRemoteMemError (long);
BOOL IsCommsError (long);
// Used by FastSync and SlowSync.
virtual long SynchronizeRecord (CBaseRecord & rRemRecord,
CBaseRecord & rLocRecord,
CBaseRecord & rBackRecord);
// code deleted that declares lots of log functions
virtual BOOL IsFatalConduitError(long lError, CBaseRecord *prRec=NULL);
public:
CBaseConduitMonitor (PROGRESSFN pFn,
CSyncProperties&,
HINSTANCE hInst = NULL);
virtual ~CBaseConduitMonitor ();
long Engage (void);
void SetDllInstance (HINSTANCE hInst);
void SetFilelinkSupport (long lvalue){ m_lFilelinkSupported = lvalue; }
// file link public functions
long UpdateTablesOnSubsc(void);
int GetCategories(char categoryNames[][CAT_NAME_LEN] );
};
There seems to be no rhyme or reason as to which functions are declared virtual and which aren't in CBaseConduitMonitor. These are not routines that get called hundreds of thousands of times a sync that never need to be overridden. We can't see any optimization that would warrant making them virtual. There's no excuse for this oversight.
Luckily, that's all that needs to be done; basemon.cpp will need to be recompiled, but that is uncomplicated.
CSalesConduitMonitor
Now we can move on to a discussion of changes we would normally make to our code and standard modifications we make to this class.
CSalesConduitMonitor Class Definition
Within this class, we do a few things. We override the category functions to do nothing. We also override EngageStandard. We insert the calls to our uploading and downloading databases. We also need to override the class functions that every conduit must override. Here is the class definition:
class CSalesConduitMonitor : public CBaseConduitMonitor
{
protected:
// required
long CreateTable (CBaseTable*& pBase);
long ConstructRecord(CBaseRecord*& pBase,
CBaseTable& rtable, WORD wModAction);
void SetArchiveFileExt();
void LogRecordData (CBaseRecord&, char*);
void LogApplicationName (char* appName, WORD);
//overridden to do nothing because we don't have categories
virtual long SynchronizeCategories (void);
virtual long ObtainRemoteCategories(void);
// ovverriden so we can upload and download our other databases
virtual long EngageStandard(void);
public:
CSalesConduitMonitor(PROGRESSFN pFn, CSyncProperties&,
HINSTANCE hInst = NULL);
};
CSalesConduitMonitor constructor
Our constructor allocates a DTLinkConverter and sets the maximum size of handheld records:
CSalesConduitMonitor::CSalesConduitMonitor(
PROGRESSFN pFn,
CSyncProperties& rProps,
HINSTANCE hInst
) : CBaseConduitMonitor(pFn, rProps, hInst)
{
m_pDTConvert = new CSalesDTLinkConverter(hInst);
m_wRawRecSize = 1000; // no record will exceed 1000 bytes
}
Functions that require overriding
There are five functions that we need to override:
- CreateTable
- This function simply creates a CSalesTable:
long CSalesConduitMonitor::CreateTable(CBaseTable*& pBase)
{
pBase = new CSalesTable();
return pBase ? 0 : -1;
}
- ConstructRecord
- This routine creates a new CSalesRecord:
long CSalesConduitMonitor::ConstructRecord(CBaseRecord*& pBase,
CBaseTable& rtable ,
WORD wModAction)
{
pBase = new CSalesRecord((CSalesTable &) rtable, wModAction);
return pBase ? 0 : -1;
}
- SetArchiveFileExt
- Next, we set the suffix for our archive files as ARC.TXT in SetArchiveFileExt. Our archive file is called UnfiledARC.TXT (all our records are category 0, the Unfiled category):
void CSalesConduitMonitor::SetArchiveFileExt()
{
strcpy(m_ArchFileExt, "ARC.TXT");
}
- LogRecordData
- Our LogRecordData summarizes a CSalesRecord to a log:
void CSalesConduitMonitor::LogRecordData(CBaseRecord& rRec,
char * errBuff)
{
// return something of the form " city name, "
CSalesRecord &rLocRec = (CSalesRecord&)rRec;
CString csStr;
int len = 0;
rLocRec.GetCity(csStr);
len = csStr.GetLength() ;
if (len > 20)
len = 20;
strcpy(errBuff, " ");
strncat(errBuff, csStr, len);
strcat(errBuff, ", ");
rLocRec.GetName(csStr);
len = csStr.GetLength() ;
if (len > 20)
len = 20;
strncat(errBuff, csStr, len);
strcat(errBuff, ", ");
strncat(errBuff, csStr, len);
}
- LogApplicationName
- Last, but not least, we need to override the routine LogApplicationName. It returns our conduit's name:
void CSalesConduitMonitor::LogApplicationName(char* appName, WORD len)
{
strncpy(appName, "Sales", len-1);
}
This ends the required routines. There are a few others we override.
The two category routines
We override the two category routines to do nothing. This prevents CBaseConduitMonitor from reading the app info block from the handheld and from actually trying to synchronize categories between the handheld and the desktop:
long CSalesConduitMonitor::ObtainRemoteCategories()
{
return 0;
}
long CSalesConduitMonitor::SynchronizeCategories()
{
return 0;
}
Modifying EngageStandard
Next, we override EngageStandard so that we can call the routines we defined in Chapter 12 for copying orders from the handheld and copying products from the desktop. We have physically copied the inherited code, since we have to place our code in the middle of it. We place it after the conduit is registered with the Sync Manager and the log is started, but before the log is finished and the Sync Manager is closed.
Example 13-1 shows the entire function in all its complexity. We wanted you to see the complexity you avoid by using CBaseConduitMomitor for syncing instead of writing all of this from scratch.
Example 13-1: EngageStandard
long CSalesConduitMonitor::EngageStandard(void)
{
CONDHANDLE conduitHandle = (CONDHANDLE)0;
long retval = 0;
char appName[40];
long pcCount = 0;
WORD hhCount = 0;
Activity syncFinishCode = slSyncFinished;
// Register this conduit with SyncMgr.DLL for communication to HH
if (retval = SyncRegisterConduit(conduitHandle))
return(retval);
// Notify the log that a sync is about to begin
LogAddEntry("", slSyncStarted,FALSE);
memset(&m_DbGenInfo, 0, sizeof(m_DbGenInfo));
// Loop through all possible 'remote' db's
for (; m_CurrRemoteDB < m_TotRemoteDBs && !retval; m_CurrRemoteDB++)
{
// Open the Remote Database
retval = ObtainRemoteTables();
// Open PC tables and load local records && local categories.
if (!retval && !(retval = ObtainLocalTables()))
{
#ifdef _FILELNK
// Process Subscriptions
// This needs to be done first before desktop records are affected
// (e.g.) FlushPCRecordIDs()... which will set all recStatus to Addedreset HH
// (m_firstDevice = eHH)
if (!retval)
if (m_rSyncProperties.m_SyncType != eHHtoPC)
{
retval = ProcessSubscription();
}
#endif
if( !(retval) )
{
FlushPCRecordIDs();
if (!(retval = ObtainRemoteCategories()))
{
// Synchronize the AppInfoBlock Info excluding the categories
m_pDTConvert->SynchronizeAppInfoBlock(m_DbGenInfo,
*m_LocRealTable,
m_rSyncProperties.m_SyncType,
m_rSyncProperties.m_FirstDevice);
// Synchronize the categories
retval = SynchronizeCategories();
}
}
}
// Synchronize the records
if (!retval)
{
#ifdef _FILELNK
// path for subsc info
CString csSubInfoPath(m_rSyncProperties.m_PathName);
csSubInfoPath =csSubInfoPath + SUBSC_FILENAME;
SubError subErr = SubLoadInfo(csSubInfoPath);
#endif
if (m_rSyncProperties.m_SyncType == eHHtoPC)
retval = CopyRecordsHHtoPC();
else if (m_rSyncProperties.m_SyncType == ePCtoHH)
retval = CopyRecordsPCtoHH();
else if (m_rSyncProperties.m_SyncType == eFast)
retval = FastSyncRecords();
else if (m_rSyncProperties.m_SyncType == eSlow)
retval = SlowSyncRecords();
#ifdef _FILELNK
SubSaveInfo(csSubInfoPath);
#endif
}
// If the number of records are not equal after a FastSync or
// SlowSync: If the PC has more records, then do a PCtoHH Sync.
// If the HH has more records, then do a HHtoPC Sync.
if (!retval && ((m_rSyncProperties.m_SyncType == eFast) ||
(m_rSyncProperties.m_SyncType == eSlow)))
{
// Get the record counts
pcCount = GetLocalRecordCount();
if (!(retval = SyncGetDBRecordCount(m_RemHandle, hhCount)))
{
if (pcCount > (long)hhCount)
retval = CopyRecordsPCtoHH();
else if (pcCount < (long)hhCount)
{
m_LocRealTable->PurgeAllRecords();
retval = CopyRecordsHHtoPC();
}
}
}
if (!retval || !IsCommsError(retval))
{
// Re-check the record counts, only if we've obtained rem tables
pcCount = GetLocalRecordCount();
hhCount = 0;
retval = SyncGetDBRecordCount(m_RemHandle, hhCount);
// If the record counts are not equal, send message to the log.
if (pcCount < (long)hhCount)
{
LogRecCountMismatch(pcCount, (long)hhCount);
syncFinishCode = slSyncAborted;
}
else if (pcCount > (long)hhCount)
{
LogPilotFull(pcCount, (long)hhCount);
syncFinishCode = slSyncAborted;
}
}
// This allows exact display order matching with the remote device.
if (!retval || !IsCommsError(retval))
if (ApplyRemotePositionMap())
LogBadXMap();
if (!retval || IsRemoteMemError(retval))
{
// Save all records to be archived to their appropriate files
if (ArchiveRecords())
LogBadArchiveErr();
// Copy PC file to Backup PC file
CString backFile(m_rSyncProperties.m_PathName);
CString dataFile(m_rSyncProperties.m_PathName);
backFile += m_rSyncProperties.m_LocalName;
dataFile += m_rSyncProperties.m_LocalName;
int nIndex = backFile.ReverseFind(_T('.'));
if (nIndex != -1)
backFile = backFile.Left(nIndex);
backFile += BACK_EXT;
// Save synced records to PC file
if (!SaveLocalTables((const char*)dataFile))
{
// Clear HH status flags
if (SyncResetSyncFlags(m_RemHandle))
{
LogBadResetSyncFlags();
syncFinishCode = slSyncAborted;
}
remove(backFile);
CopyFile(dataFile, backFile, FALSE);
}
else
syncFinishCode = slSyncAborted;
}
if (!IsCommsError(retval))
SyncCloseDB(m_RemHandle);
}
//
if (retval)
syncFinishCode = slSyncAborted;
// Get the application name
memset(appName, 0, sizeof(appName));
LogApplicationName(appName, sizeof(appName));
LogAddEntry(appName, syncFinishCode,FALSE);
if (!IsCommsError(retval))
SyncUnRegisterConduit(conduitHandle);
return(retval);
}
These are all the changes to the CSalesConduitMonitor class. As you can see, there was very little complexity to the added code, especially when you realize that most of the difficulty occurs in the last routine, where we have to fold our code into a fairly large routine.
Now that we have dealt with the administration portion of the code, it is time to create the tables that hold the data.
CBaseTable--Copying the Class
We need to create the class that is based on CBaseTable. Before we can define our table structure, however, we need to deal with another class problem. Once again, the solution is to resort to copying the class as a whole, and it is for just as unsatisfying a set of reasons.
The problem--virtual reality beats nonvirtual reality
We want to store our tables in comma-delimited text files rather than in the default MFC archived file format. This is certainly a reasonable wish on our part. It is an even more attractive alternative when you realize that MFC archived files are very hard to read from anything but MFC-based code. We have no desire to create an MFC application just to read our data files, when a text-based system gives us such enormous versatility. Good reasoning on our part is unfortunately difficult to act on.
For example, if we attempt to override CBaseTable::SaveTo and CBaseTable::OpenFrom, we don't get very far. As you might have guessed, those two member functions are not declared
virtualin the original CBaseTable class. We are stuck then with seeking a workaround. The solution to this problem is to copy the basetabl.cpp and basetabl.h files to our conduit's source folder.
The CBaseTable code you have to copy
We need to modify the declaration of CBaseTable to add the
virtualkeywords. Here is the code we copy and the two changes we make:
class TABLES_DECL CBaseTable : public CObject
{
protected:
friend CBaseIterator;
friend CRepeateEventIterator;
friend CBaseRecord;
CString m_TableName;
CString m_TableString;
CBaseSchema* m_Schema;
CBaseFieldArray* m_Fields;
CCategoryMgr* m_pCatMgr;
DWORD m_dwVersion;
BOOL m_bOnTheMac;
// CPtrArray m_OpenIts; // List of open CBaseIterator(s)
// long AddIterator (long& ItPos, CBaseIterator *);
// long RemoveIterator (long ItPos, CBaseIterator *);
BOOL SubstCRwithNL (CString &);
BOOL SubstNLwithCR (CString &);
void Serialize (CArchive &);
long WipeOutRow (long RecPos);
long ReadInFields (CArchive &ar);
long DestroyAllFields (void) ;
void DeleteContents (void) ;
virtual long ConstructProperField (eFieldTypes, CBaseField**);
public:
DECLARE_SERIAL(CBaseTable)
CBaseTable ();
CBaseTable (DWORD dwVersion);
virtual ~CBaseTable ();
// change OpenFrom to virtual
virtual long OpenFrom (CString& rTableName, long openFlag);
long ExportTo (CString& rTableName, CString & csError);
// change SaveTo to virtual
virtual long SaveTo (CString& rTableName);
virtual long Save (void);
virtual long GetRecordCount (void);
virtual long GetFieldCount (void);
virtual BOOL AtEof (long nRecPos);
virtual long AlignFieldPointers (long RecPos, CBaseRecord&);
virtual long GetMySchema (const CBaseSchema*& pSchema);
virtual long PurgeDeletedRecords (void);
virtual long ClearPlacementField (void);
virtual long PurgeAllRecords (void);
virtual long AppendBlankRecord (CBaseRecord&);
virtual long AppendDuplicateRecord (CBaseRecord&,
CBaseRecord&,
BOOL bAllFlds = TRUE);
virtual long GetTableString (CString& rTableString);
virtual long SetTableString (CString& rTableString);
virtual CCategoryMgr* GetCategoryManager (void);
virtual void DumpRecords(LPCSTR lpPathName,BOOL bAppend=TRUE);
};
Don't breathe a sigh of relief just yet--we have a complication. This isn't like the straightforward copying we did with basemon.cpp for CBaseConduitMonitor--copy, link, recompile, and everything works greats. This is a horse of an entirely different color--unlike basemon.cpp, this isn't code that is normally added to your project and linked with your remaining code. Therein lies the wrinkle. This is code that is found in a DLL in the folder with HotSync. Since the DLL is already compiled without the virtual keyword, the DLL won't cooperate by calling our derived class's OpenFrom and SaveTo.
Our solution was to statically link the basetabl.cpp code into our application and not use the table DLL at all. This also required adding the define of the symbol
_TABLESto our project--thereby ensuring that the
TABLES_DECLdefine was no longer defined as
_ _declspec(import). This caused basemon.h to no longer declare the class as being imported from a DLL. Note that the only other choice besides imported for the
TABLES_DECLdefine was
_ _declspec(export). We took what was offered. Unfortunately, the result is that our conduit DLL unnecessarily exports the functions in the CBaseTable class. On the positive side, by these various machinations, we avoid having to change the contents of basetabl.cpp at all.
That is all of the unusual stuff we need to do. Now we can move to more normal overriding.
CSalesTable
We need to handle a number of things in our CBaseTable class. In our definitions, we override the two functions. We also add a couple of new routines to handle the read and write functions.
Class definition
Here's our actual class definition (with OpenFrom and SaveTo overridden). We also include ReadCustomer and WriteRecord, which are utility functions used by OpenFrom and SaveTo:
class CSalesTable : public CBaseTable
{
public:
CSalesTable () ;
// required
virtual long AppendDuplicateRecord (
CBaseRecord&,
CBaseRecord&,
BOOL bAllFlds = TRUE
);
// optional overridden
long OpenFrom(CString& rTableName, long openFlag);
long SaveTo(CString& rTableName);
// useful
CSalesRecord *ReadCustomer(CStdioFile &file);
long WriteRecord(HANDLE hFile, CSalesRecord& rRec);
};
CSalesTable constructor
The constructor creates the schema and initializes it:
CSalesTable::CSalesTable()
: CBaseTable()
{
m_Schema = new CSalesSchema;
if (m_Schema)
m_Schema->DiscoverSchema();
}
CSalesTable functions
AppendDuplicateRecord creates a new row and copies
rFromto
rTo. Note that
rFromand
rToare actually CSalesRecord objects:
long CSalesTable::AppendDuplicateRecord(CBaseRecord& rFrom,
CBaseRecord& rTo, BOOL bAllFlds)
{
int tempInt;
CString tempStr;
long retval = -1;
CSalesRecord& rFromRec = (CSalesRecord&)rFrom;
CSalesRecord& rToRec = (CSalesRecord&)rTo;
// Source record must be positioned at valid data.
if (!rFromRec.m_Positioned)
return -1;
if ((retval = CBaseTable::AppendBlankRecord(rToRec)) != 0)
return retval;
if (bAllFlds) {
retval = rFromRec.GetRecordId(tempInt) ||
rToRec.SetRecordId(tempInt);
if (retval != 0)
return retval;
if ((retval = rFromRec.GetStatus(tempInt)) != 0)
if ((retval = rToRec.SetStatus(tempInt)) != 0)
return retval;
if ((retval = rToRec.SetArchiveBit(rFromRec.IsArchived())) != 0)
return retval;
}
if ((retval = rToRec.SetPrivate(rFromRec.IsPrivate())) != 0)
return retval;
retval = rFromRec.GetID(tempInt) || rToRec.SetID(tempInt);
if (retval != 0)
return retval;
retval = rFromRec.GetName(tempStr) || rToRec.SetName(tempStr);
if (retval != 0)
return retval;
retval = rFromRec.GetAddress(tempStr) || rToRec.SetAddress(tempStr);
if (retval != 0)
return retval;
retval = rFromRec.GetCity(tempStr) || rToRec.SetCity(tempStr);
if (retval != 0)
return retval;
retval = rFromRec.GetPhone(tempStr) || rToRec.SetPhone(tempStr);
if (retval != 0)
return retval;
return 0;
}
This is the only required function. There are also two other functions we override.
SaveTo
Here's our version of SaveTo. We use it to save in a comma-delimited format:
long CSalesTable::SaveTo(CString& rTableName)
{
CSalesRecord locRecord(*this, 0);
CBaseIterator locIterator(*this);
long err;
CString tdvFile(rTableName);
HANDLE tdvFileStream = CreateFile(
tdvFile,
GENERIC_READ | GENERIC_WRITE,
FILE_SHARE_READ | FILE_SHARE_WRITE,
NULL,
CREATE_ALWAYS,
FILE_ATTRIBUTE_NORMAL | FILE_FLAG_SEQUENTIAL_SCAN,
NULL
);
// generate the file
if (tdvFileStream != (HANDLE)INVALID_HANDLE_VALUE) {
SetFilePointer(tdvFileStream, 0, NULL, FILE_BEGIN);
SetEndOfFile(tdvFileStream);
err = locIterator.FindFirst(locRecord, FALSE);
while (!err) {
WriteRecord(tdvFileStream, locRecord);
err = locIterator.FindNext(locRecord, FALSE);
}
if (err == -1) // we reached the last record
err = 0;
CloseHandle(tdvFileStream);
}
return err;
}
It creates the file, opens it, calls WriteRecord to do the actual writing of one record, and then closes the file.
WriteRecord
Note that WriteRecord doesn't write the attributes (modified, deleted, etc.) to a record, because it isn't necessary. By the time we write a table to disk, all deleted records should be deleted, all modified records will be synced, all archived records will be archived, and all added records will be synced. Thus, the attribute information is not relevant:
long CSalesTable::WriteRecord(HANDLE hFile, CSalesRecord& rRec)
{
int customerID;
CString csName, csAddress, csCity, csPhone;
int recId;
DWORD dwPut;
unsigned long len;
const int kMaxRecordSize = 1000;
char buf[kMaxRecordSize];
// Get the record ID
rRec.GetRecordId(recId);
// Get the customer ID, name, address, city & phone.
// Replace any tabs with spaces in all.
rRec.GetID(customerID);
rRec.GetName(csName);
rRec.GetAddress(csAddress);
rRec.GetCity(csCity);
rRec.GetPhone(csPhone);
ReplaceTabs(csName);
ReplaceTabs(csAddress);
ReplaceTabs(csCity);
ReplaceTabs(csPhone);
//",
customerID,
csName.GetBuffer(csName.GetLength()),
csAddress.GetBuffer(csAddress.GetLength()),
csCity.GetBuffer(csCity.GetLength()),
csPhone.GetBuffer(csPhone.GetLength()),
rRec.IsPrivate() ? "P": "",
recId
);
len = strlen(buf);
WriteFile(
hFile,
buf,
len,
&dwPut,
NULL
);
ASSERT(dwPut == len);
// Release the string buffers
csName.ReleaseBuffer();
csAddress.ReleaseBuffer();
csCity.ReleaseBuffer();
csPhone.ReleaseBuffer();
return 0;
}
For each of the strings that will be written (name, address, city, phone), WriteRecord replaces any tabs or newlines with spaces by using ReplaceTabs. This is necessary because it would ruin our tab-delimited format if a tab or newline occurred within a field.
ReplaceTabs
Here's the code for ReplaceTabs:
static long ReplaceTabs(CString& csStr)
{
char *p;
p = csStr.GetBuffer(csStr.GetLength());
// Scan and replace all tabs or newlines with blanks
while (*p) {
if (*p == '\t' || *p == '\r' || *p == '\n')
*p = ' ';
++p;
}
csStr.ReleaseBuffer();
return 0;
}
This is all that needs to be done to handle writing to the file.
OpenFrom
We now need to take care of reading one of these files. OpenFrom does that. It checks for the existence of the file, opens and closes it, and handles any exceptions that are thrown:
long CSalesTable::OpenFrom(CString& rTableName, long openFlag)
{
char *pszName ;
CFileStatus fStat;
CStdioFile *file = 0;
pszName = rTableName.GetBuffer(rTableName.GetLength());
// Check for the presence of the disk file, if not here get out
// *without* invoking any of the reading code.
if (!CStdioFile::GetStatus(pszName, fStat))
return DERR_FILE_NOT_FOUND;
TRY
{
file = new CStdioFile(pszName, CFile::modeReadWrite |
CFile::shareDenyWrite);
rTableName.ReleaseBuffer(-1);
}
CATCH_ALL(e)
{
rTableName.ReleaseBuffer(-1);
if (file)
file->Abort();
delete file;
return ((CFileException*)e)->m_cause;
}
END_CATCH_ALL
// Get rid of current contents (if any)
DestroyAllFields();
CSalesRecord *newRecord = 0;
TRY
{
while ((newRecord = ReadCustomer(*file)) != 0) {
delete newRecord;
newRecord = 0;
}
file->Close();
}
CATCH_ALL(e)
{
file->Abort();
delete file;
delete newRecord;
return DERR_INVALID_FILE_FORMAT;
}
END_CATCH_ALL
delete file;
return 0;
}
ReadCustomer
ReadCustomer has quite a lot of work to do. It creates a new CSalesRecord for each line in the file. It returns 0 when there are no more lines:
CSalesRecord *CSalesTable::ReadCustomer(CStdioFile &file)
{
static char gBigBuffer[4096];
int retval;
if (file.ReadString(gBigBuffer, sizeof(gBigBuffer)) == NULL) = "c";
if (!uniqueID)
uniqueID = "0";
if (customerID && name) {
CSalesRecord *rec = new CSalesRecord(*this, 0);
if (AppendBlankRecord(*rec)) {
// should throw an error here!
return 0;
// return(CONDERR_BAD_REMOTE_TABLES);
}
retval = rec->SetRecordId(atol(uniqueID));
retval = rec->SetCategoryId(0);
retval = rec->SetID(atol(customerID));
retval = rec->SetName(CString(name));
retval = rec->SetAddress(CString(address));
retval = rec->SetCity(CString(city));
retval = rec->SetPhone(CString(phone));
retval = rec->SetPrivate(*priv == 'P');
int attr = 0;
// 'N' -- new, 'M' -- modify, 'D'-- delete, 'A' -- archive
// if it's Add, it can't be modify
if (strchr(attributes, 'N'))
attr |= fldStatusADD;
else if (strchr(attributes, 'M'))
attr |= fldStatusUPDATE;
if (strchr(attributes, 'D'))
attr |= fldStatusDELETE;
if (strchr(attributes, 'A'))
attr |= fldStatusARCHIVE;
rec->SetStatus(attr);
return rec;
} else
return 0;
}
Although WriteRecord doesn't write any attributes, ReadCustomer must handle the possibility of reading them. You might wonder how attributes could have gotten into the file. The answer is simple--the user of the desktop application that edits our comma-delimited file may have changed this record. Since we support desktop editing of records, we need to know if a modification has occurred (for the next sync).
In such instances, the routine appends a value to the end of the record. ReadCustomer adds an
Mas a field at the end. If the record has been deleted, it doesn't remove the record line from the file; instead, it adds a
Din the last field. If the record is archived, it adds an
A, and new records get marked with an
N. On the next sync, all these newly marked records are dealt with by the sync code. Note that the marking is almost completely analogous to the marking done on the handheld side.
CSalesSchema
The schema class defines the number, ordering, and type of the fields. We also declare a number of constants and create one function.
Constants
These constants define the field ordering within a row for the record information we save:
#define slFLDRecordID 0
#define slFLDStatus 1
#define slFLDCustomerID 2
#define slFLDName 3
#define slFLDAddress 4
#define slFLDCity 5
#define slFLDPhone 6
#define slFLDPrivate 7
#define slFLDPlacement 8
#define slFLDCategoryID 9
#define slFLDLast slFLDCategoryID
CSalesSchema class definition
This is very straightforward, with only one function to define:
class CSalesSchema : public CBaseSchema
{
public:
virtual long DiscoverSchema (void);
};
CSalesSchema functions
The DiscoverSchema function must set the number of fields per record, set the type of each record, and mark which fields contain the record ID, the attributes, and the category ID. Even though our Sales application keeps its records sorted by customer number, we are still required to reserve a field for the record number:
long CSalesSchema::DiscoverSchema(void)
{
m_FieldsPerRow = slFLDLast + 1;
m_FieldTypes.SetSize(m_FieldsPerRow);
m_FieldTypes.SetAt(slFLDRecordID, (WORD)eInteger);
m_FieldTypes.SetAt(slFLDStatus, (WORD)eInteger);
m_FieldTypes.SetAt(slFLDCustomerID, (WORD)eInteger);
m_FieldTypes.SetAt(slFLDName, (WORD)eString);
m_FieldTypes.SetAt(slFLDAddress, (WORD)eString);
m_FieldTypes.SetAt(slFLDCity, (WORD)eString);
m_FieldTypes.SetAt(slFLDPhone, (WORD)eString);
m_FieldTypes.SetAt(slFLDPrivate, (WORD)eBool);
m_FieldTypes.SetAt(slFLDPlacement, (WORD)eInteger);
m_FieldTypes.SetAt(slFLDCategoryID, (WORD)eInteger);
// Be sure to set the 4 common fields' position
m_RecordIdPos = slFLDRecordID;
m_RecordStatusPos = slFLDStatus;
m_CategoryIdPos = slFLDCategoryID;
m_PlacementPos = slFLDPlacement;
return 0;
}
CSalesRecord
CSalesRecord is based on the CBaseRecord class. This is the class that deals with records in the table. We have routines that get and set appropriate fields in the record.
CSalesRecord class definition
The constructor takes a
wModActionparameter, which it uses to initialize its base class. Other routines just get and set the values of a customer record:
class CSalesRecord : public CBaseRecord
{
protected:
friend CSalesTable;
public:
CSalesRecord (CSalesTable &rTable,
WORD wModAction);
long SetID (int ID);
long SetName (CString &csName);
long SetAddress (CString &csAddress);
long SetCity (CString &csCity);
long SetPhone (CString &csPhone);
long SetPrivate (BOOL bPrivate);
long GetID (int &ID);
long GetName (CString &csName);
long GetAddress (CString &csAddress);
long GetCity (CString &csCity);
long GetPhone (CString &csPhone);
BOOL IsPrivate (void);
// required overrides
virtual BOOL operator==(const CBaseRecord&r);
virtual long Assign(const CBaseRecord&r);
};
Class constructor
The constructor doesn't do much:
CSalesRecord::CSalesRecord(
CSalesTable &rTable,
WORD wModAction
) : CBaseRecord(rTable, wModAction)
{
}
CSalesRecord functions
There are a number of functions, all of which involve getting or setting records fields. There are routines that get or set the customer ID, name, address, city, and ZIP Code. There are also routines that compare records and assign the values of one record to another.
Getting the customer ID
Here's the routine that gets the value of the customer ID. It gets the appropriately numbered field (checking first to make sure the table is positioned at this record) and asks the field for the current value:
long CSalesRecord::GetID(int &customerID)
{
CIntegerField* pFld;
if (m_Positioned &&
(pFld = (CIntegerField*) m_Fields.GetAt(slFLDCustomerID)) &&
pFld->GetValue(customerID) == 0)
return 0;
else
return DERR_RECORD_NOT_POSITIONED;
}
Setting the customer ID
Here's the routine that sets the customer ID. Note that if
m_wModActionis equal to
MODFILTER_STUPID, the code checks the value being set to see if it is equal to the current value--if it is, the update (modified) attribute of the status isn't set:
long CSalesRecord::SetID(int customerID)
{
BOOL autoFlip = FALSE;
int currStatus = 0;
long retval = DERR_RECORD_NOT_POSITIONED;
CIntegerField* pFld = NULL;
if (m_Positioned &&
(pFld = (CIntegerField*) m_Fields.GetAt(slFLDCustomerID)))
{
if (m_wModAction == MODFILTER_STUPID)
{
GetStatus(currStatus);
if (currStatus != fldStatusADD)
{
CIntegerField tmpFld(customerID);
if (pFld->Compare(&tmpFld))
autoFlip = TRUE;
}
}
if (!pFld->SetValue(customerID))
{
if (autoFlip)
SetStatus(fldStatusUPDATE);
retval = 0;
}
}
return retval;
}
Because the routines to get and set the name, address, city, and ZIP, and private value are so similar to those for the customer ID, we are not bothering to show them.
Assigning one record to another
We need an assign function that assigns one CSalesRecord to another. It copies all fields, including the record ID and attributes:
long CSalesRecord::Assign(const CBaseRecord& rSubj)
{
if (!m_Positioned)
return -1;
for (int x=slFLDRecord)
pMyFld->Assign(*pSubjFld);
}
return 0;
}
Comparing one record to another
The comparison routine (== operator) checks to see whether one CSalesRecord is equal to another (ignoring record ID and attributes):
BOOL CSalesRecord::operator==(const CBaseRecord& rSubj)
{
if (!m_Positioned)
return FALSE;
for (int x=slFLDCustomer)
return FALSE;
if (pMyFld->Compare(pSubjFld) != 0)
return FALSE;
}
return TRUE;
}
CSalesDTLinkConverter
This is the last class that we have in our conduit. It is the one responsible for converting a record from one format to another and vice versa. We have one function that converts a Palm OS handheld record into a CBaseRecord format, and another does the opposite.
CSalesDTLinkConverter class definition
The definition is simple with just two functions:
class CSalesDTLinkConverter : public CBaseDTLinkConverter
{
public:
CSalesDTLinkConverter(HINSTANCE hInst);
long ConvertToRemote(CBaseRecord &rRec, CRawRecordInfo &rInfo);
long ConvertFromRemote(CBaseRecord &rRec, CRawRecordInfo &rInfo);
};
The
HINSTANCEparameter in the constructor is there so that the converter can obtain strings from the DLL resource file, if it needs to.
CSalesDTLinkConverter constructor
Here's the constructor:
CSalesDTLinkConverter::CSalesDTLinkConverter(HINSTANCE hInst)
: CBaseDTLinkConverter(hInst)
{
}
Converting to Palm record format
Here's the code that converts to a handheld record. Note that it must set the record ID, the category ID, and the attributes as well as write the record contents. We use a utility routine, SwapDWordToMotor, to swap the customer ID:
long CSalesDTLinkConverter::ConvertToRemote(CBaseRecord& rRec,
CRawRecordInfo& rInfo)
{
long retval = 0;
char *pBuff;
CString tempStr;
int destLen, tempInt;
char *pSrc;
int customerID;
CSalesRecord& rExpRec = (CSalesRecord &)rRec;
rInfo.m_RecSize = 0;
// Convert the record ID and Category ID
retval = rExpRec.GetRecordId(tempInt);
rInfo.m_RecId = (long)tempInt;
retval = rExpRec.GetCategoryId(tempInt);
rInfo.m_CatId = tempInt;
// Convert the attributes
rInfo.m_Attribs = 0;
if (rExpRec.IsPrivate())
rInfo.m_Attribs |= PRIVATE_BIT;
if (rExpRec.IsArchived())
rInfo.m_Attribs |= ARCHIVE_BIT;
if (rExpRec.IsDeleted())
rInfo.m_Attribs |= DELETE_BIT;
if (rExpRec.IsModified() || rExpRec.IsAdded())
rInfo.m_Attribs |= DIRTY_BIT;
pBuff = (char*)rInfo.m_pBytes;
// customer ID
retval = rExpRec.GetID(customerID);
*((DWORD *)pBuff) = SwapDWordToMotor(customerID);
pBuff += sizeof(DWORD);
rInfo.m_RecSize += sizeof(DWORD);
// name
retval = rExpRec.GetName;
// address
retval = rExpRec.GetAddress;
// city
retval = rExpRec.GetCity;
// phone
retval = rExpRec.GetPhone;
return retval;
}
Converting to CBaseRecord format
Here's the code that converts from a handheld record to a CBaseRecord format. Note that it must read the record ID, the category ID, the attributes, and the record contents. We use a utility routine, SwapDWordToIntel, to swap the customer ID. If the record is deleted, there are no record contents. We don't try to read the record contents in such cases.
long CSalesDTLinkConverter::ConvertFromRemote(
CBaseRecord& rRec,
CRawRecordInfo& rInfo)
{
long retval = 0;
char *pBuff;
CString aString;
CSalesRecord& rExpRec = (CSalesRecord &)rRec;
retval = rExpRec.SetRecordId(rInfo.m_RecId);
retval = rExpRec.SetCategoryId(rInfo.m_CatId);
if (rInfo.m_Attribs & ARCHIVE_BIT)
retval = rExpRec.SetArchiveBit(TRUE);
else
retval = rExpRec.SetArchiveBit(FALSE);
if (rInfo.m_Attribs & PRIVATE_BIT)
retval = rExpRec.SetPrivate(TRUE);
else
retval = rExpRec.SetPrivate(FALSE);
retval = rExpRec.SetStatus(fldStatusNONE);
if (rInfo.m_Attribs & DELETE_BIT) // Delete flag
retval = rExpRec.SetStatus(fldStatusDELETE);
else if (rInfo.m_Attribs & DIRTY_BIT) // Dirty flag
retval = rExpRec.SetStatus(fldStatusUPDATE);
// Only convert body if remote record is *not* deleted..
if (!(rInfo.m_Attribs & DELETE_BIT))
{
pBuff = (char*)rInfo.m_pBytes;
// Customer ID
long customerID = SwapDWordToIntel(*((DWORD*)pBuff));
retval = rExpRec.SetID(customerID);
pBuff += sizeof(DWORD);
// Name
AddCRs(pBuff, strlen(pBuff));
aString = m_TransBuff;
retval = rExpRec.SetName(aString);
pBuff += strlen(pBuff) + 1;
// Address
AddCRs(pBuff, strlen(pBuff));
aString = m_TransBuff;
retval = rExpRec.SetAddress(aString);
pBuff += strlen(pBuff) + 1;
// City
AddCRs(pBuff, strlen(pBuff));
aString = m_TransBuff;
retval = rExpRec.SetCity(aString);
pBuff += strlen(pBuff) + 1;
// Phone
AddCRs(pBuff, strlen(pBuff));
aString = m_TransBuff;
retval = rExpRec.SetPhone(aString);
pBuff += strlen(pBuff) + 1;
}
return retval ;
}
The DLL
The one remaining piece in our puzzle is the DLL where the CSalesConduitMonitor actually gets created.
DLL OpenConduit
DLL's OpenConduit is where we put the conduit creation code:
_ _declspec(dllexport) long OpenConduit(PROGRESSFN pFn,
CSyncProperties& rProps)
{
AFX_MANAGE_STATE(AfxGetStaticModuleState());
long retval = -1;
rProps.m_DbType = 'Cust';// in case it needs to be created
if (pFn) {
CSalesConduitMonitor* pMonitor;
pMonitor = new CSalesConduitMonitor(pFn, rProps, myInst);
if (pMonitor)
{
retval = pMonitor->Engage();
delete pMonitor;
}
}
return retval;
}
Note that we set the
m_DbTypefield of
rProps. We do this so that CBaseConduitMonitor will create the customer database on the handheld if it doesn't exist; it uses the type found in
rProps.m_DbTypeto do the job.
We also pass our DLL's instance,
myInst, as the third parameter. It is used to retrieve resource strings. The instance is stored as a global variable, along with three others:
static int ClientCount = 0;
static HINSTANCE hRscInstance = 0;
static HINSTANCE hDLLInstance = 0;
HINSTANCE myInst=0;
These globals are initialized when the DLL is opened.
DLL class definition
Here's our DLL's class declaration (as created automatically by Visual C++):
class CSalesCondDll : public CWinApp
{
public:
//CSalesCondDll();
virtual BOOL InitInstance(); // Initialization
virtual int ExitInstance(); // Termination
// Overrides
// ClassWizard generated virtual function overrides
//{{AFX_VIRTUAL(CSalesCondDll)
//}}AFX_VIRTUAL
//{{AFX_MSG(CSalesCondDll)
// NOTE - the ClassWizard will add/remove member functions here.
// DO NOT EDIT what you see in these blocks of generated code !
//}}AFX_MSG
DECLARE_MESSAGE_MAP()
};
Initializing function
InitInstance must initialize the table's DLL. This contains some field functions beyond those in the basetable.cpp file. It must also initialize the PDCmn DLL, which contains some resources for the dialog shown in ConfigureConduit:
BOOL CSalesCondDll::InitInstance()
{
// DLL initialization
TRACE0("SALESCOND.DLL initializing\n");
if (!ClientCount ) {
hDLLInstance = AfxGetInstanceHandle();
hRscInstance = hDLLInstance;
// add any extension DLLs into CDynLinkLibrary chain
InitTables5DLL();
InitPdcmn5DLL();
}
myInst = hRscInstance;
ClientCount++;
return TRUE;
}
Exit function
We also need an ExitInstance :
int CSalesCondDll::ExitInstance()
{
TRACE0("SALESCOND.DLL Terminating!\n");
// Check for last client and clean up potential memory leak.
if (--ClientCount <= 0)
{
PalmFreeLanguage(hRscInstance, hDLLInstance);
hRscInstance = hDLLInstance;
}
// DLL clean up, if required
return CWinApp::ExitInstance();
}
DLL resources
There are a variety of strings that the Conduit Manager loads from resources (including all the logging strings). These strings have to be stored within our DLL. In our resource file, SalesCond.rc, we don't have any explicit resources. Instead, in the Resource Includes panel, we add a compile-time directive:
#include "..\include\Res\R_English\Basemon.rc"
This makes all the standard basemon resource strings part of our DLL.
Testing the Conduit
Before testing, make sure you use CondCfg.exe to register the Remote Database name for the Sales conduit as "Customers-Sles". This is what tells your conduit what database to sync.
There are some good tests you can perform to ensure that your conduit is working properly:
- Sync having never run your application
- Your database(s) won't yet exist. This simulates a user syncing after installing your software but before using it. If your conduit performs correctly, any data from the desktop should be copied to the handheld.
- Sync having run your application once
- Do this test after first deleting your databases. This simulates a user syncing after installing your software and using it. If everything works as expected, data from the handheld should be copied to the desktop.
- Add a record on the handheld and sync
- Make sure the new record gets added to the desktop.
- Add a record on the desktop and sync
- Make sure the new record gets added to the handheld.
- Delete a record on the handheld and sync
- Make sure the record gets deleted from the desktop.
- Delete a record on the desktop and sync
- Make sure the record gets deleted from the handheld.
- Archive a record on the handheld
- Make sure the record gets deleted from the main desktop file and gets added to the archive.
There are other tests you can make, but these provide the place to begin.
Generic Conduit
Generic Conduit is the other approach to creating a conduit that handles two-way syncing. It is based on a new set of classes (currently unsupported) that Palm Computing has recently started distributing. Having seen all that is involved in creating a conduit based on the basemon and basetabl classes, you can understand why Palm Computing wanted to offer a simpler solution to developers. Generic Conduit is one of Palm's solutions to this problem--these classes are intended to make it easier to get a conduit up and running.
Advantages of Using Generic Conduit
There are some powerfully persuasive advantages to basing a conduit on these new classes:
- In some cases, you don't need to write any code
- Generic Conduit contains everything, including ConfigureConduit, GetConduitName, etc. If you compile and register it, it'll be happy to two-way sync your Palm database to a file on the desktop. This approach requires the use of its own file format, however. If you don't like that format, you need to customize the Generic Conduit classes to some extent.
- If you do have to write code, it might not be much
- The number of classes and the number of methods are much less daunting than those found in the basemon and basetabl classes.
- All the source code is available
- The entire source code is provided; you don't have to rely on any DLLs (basemon uses Tables.DLL for the CBaseTable class and MFC for serialization). Further, if you so desire, you can change any or all of the source code.
- There's less work involved in handling records
- Generic Conduit is unlike basemon, which has a schema and attempts to represent your record as fields in memory. Generic Conduit treats your record as just a sequence of bytes. Thus, records are copied from the handheld to the desktop and left untouched; the default file format stores them as is. Record comparison is accomplished by comparing all the bytes in each record to see if they are identical. This is a far cry from basemon's approach, which represents records in memory as fields and does field-by-field comparison.
- The approach to conduit creation makes more sense
This Generic Conduit approach makes a great deal of sense. All that's needed for synchronization to work correctly is to compare two records to see whether they are the same or different. There's no need to know what fields exist or anything else; you just compare the bytes.
Disadvantage to Using Generic Conduit
There is a disadvantage to this approach. The good news is it will possibly fade over time:
- It's new
- The Basemon classes are used for Palm's shipping conduits, and by numerous third parties. That means they work very well and, presumably, most of the bugs have already been found and fixed. If you're an early user of generic conduits, you are at risk for as yet unfound bugs of who knows what nature.
Generic Conduit Classes
There are eight classes that affect your use of Generic Conduit. As might be expected, each has a different responsibility. Figure 13-2 shows you the inheritance relationship..
Now let's look at what each class does.
CPalmRecord
This represents a Palm record; it stores attributes, a unique ID, a category number, a record length, and a pointer to the raw record data.
CDbManager
This is the class that is responsible for a database. It defines methods for iterating through the records, adding and deleting records, etc. As you can see from Figure 13-2, it is also an abstract class; there are four derived classes that implement these methods.
CHHMgr
This class is derived from CDbManager, and implements the CDbManager member functions by using the Sync Manager. This concrete subclass uses the interface of the abstract class. It can be used just like any other database, but its implementation is different. For example, its method to add a record is implemented using SyncWriteRec.
CPcMgr
This class implements the CdbManager member functions for a file on the desktop. When a file is opened, it reads all the records into memory and stores them in a list. Changes to the database are reflected in memory until the database is closed; at that point, the records are rewritten to the database.
You'll often create your own derived class of CPcMgr and override the functions, RetrieveDB and StoreDB, to read and write your own file formats.
CArchiveDatabase
This class is derived from CPcMgr. It is responsible for handling the archive file on the desktop.
CBackupMgr
This class is also derived from CPcMgr. As you might have guessed, it is responsible for the backup file on the desktop.
CPLogging
This class is responsible for logging when any type of a failure that occurs during syncing.
CSynchronizer
This class is responsible for handling the actual synchronization. It creates the database classes and manages the entire process (it has many of the same duties as CBaseConduitMonitor). You will often override one of its member functions, CreateDBManager, to create your own class derived from CPcMgr.
Amazing as it may seem that is all there is worth noting about the Generic Conduit classes. Now, let's turn to the code based on generic conduit that we create for the Sales application conduit.
Sales Conduit Based on Generic Conduit
This sample was created using the version of the Generic Conduit found in the Conduit Development Kit 4.0. We used the Palm Conduit Wizard from Visual C++ to create the initial conduit. We chose the following options:
- What kind of conduit do you want ?
- Generic.
- Which App on the organizer will your conduit interface with?
- Generic (will sync any app). Subclass PCMgr (allows custom file format).
- What type of data transfer between the desktop and the organizer will your conduit provide?
- Two way mirror image syncrhonization.
- What features would you like?
- Archiving, Sync action configuration dialog.
As we look at the source code of the conduit, we'll look only at the classes and functions we modified. The others that the Wizard generated we can use without modification.
CSalesPCMgr
We have derived a new class from CPCMgr, because we want to support the same a tab-delimited text format we used with the earlier conduit classes. Here is our new class declaration, which was created initially by the Wizard, and to which we then added a few member functions:
class CSalesPcMgr : public CPcMgr
{
public:
CSalesPcMgr(CPLogging *pLogging, DWORD dwGenericFlags, char *szDbName,
TCHAR *pFileName = NULL,
TCHAR *pDirName = NULL,
eSyncTypes syncType = eDoNothing);
virtual ~CSalesPcMgr();
virtual long RetrieveDB(void);
virtual long StoreDB(void);
protected:
long ReadString(char *buffer, long size);
bool ReadRecord(CPalmRecord &rec);
long WriteRecord(CPalmRecord *pPalmRec);
};
StoreDB Function
Our StoreDB routine writes out the list of records in text-delimited format:
long CSalesPcMgr::StoreDB(void)
{
if ( !m_bNeedToSave) { // if no changes, don't waste time saving
return 0;
}
long retval = OpenDB();
if (retval)
return GEN_ERR_UNABLE_TO_SAVE;
for (DWORD dwIndex = 0; (dwIndex < m_dwMaxRecordCount) && (!retval); dwIndex++){
if (!m_pRecordList[dwIndex]) // if there is no record, skip ahead
continue;
retval = WriteRecord(m_pRecordList[dwIndex]);
if (retval != 0){
CloseDB();
return GEN_ERR_UNABLE_TO_SAVE;
}
}
CloseDB();
m_bNeedToSave = FALSE;
return 0;
}
It calls WriteRecord which writes line by line.
WriteRecord
This writes the record:
long CSalesPcMgr::WriteRecord(CPalmRecord *pPalmRec)
{
unsigned long len;
const int kMaxRecordSize = 1000;
char buf[kMaxRecordSize];
char rawRecord[kMaxRecordSize];
DWORD recordSize = kMaxRecordSize;
long retval;
retval = pPalmRec->GetRawData((unsigned char *) rawRecord,
&recordSize);
if (retval) {
return retval;
}
Customer *aCustomer = RawRecordToCustomer(rawRecord);
//",
aCustomer->customerID,
aCustomer->name,
aCustomer->address,
aCustomer->city,
aCustomer->phone,
pPalmRec->IsPrivate() ? "P": "",
pPalmRec->GetID()
);
len = strlen(buf);
retval = WriteOutData(buf, strlen(buf));
delete aCustomer;
return retval;
}
It calls RawRecordToCustomer, which converts the bytes in a record to a customer:
Customer *RawRecordToCustomer(void *rec)
{
Customer *c = new Customer;
PackedCustomer *pc = (PackedCustomer *) rec;
c->customerID = SwapLong(pc->customerID);
char * p = (char *) pc->name;
c->name = new char[strlen(p)+1];
strcpy(c->name, p);
p += strlen(p) + 1;
c->address = new char[strlen(p)+1];
strcpy(c->address, p);
p += strlen(p) + 1;
c->city = new char[strlen(p)+1];
strcpy(c->city, p);
p += strlen(p) + 1;
c->phone = new char[strlen(p)+1];
strcpy(c->phone, p);
return c;
}
Retrieving a Database
We also have a function, RetrieveDB, that takes a text file, reads it, creates records from it, and then adds each record to the list maintained by the CPcMgr.
long CSalesPcMgr::RetrieveDB(void)
{
long err = 0;
m_bNeedToSave = FALSE;
if (!_tcslen(m_szDataFile))
return GEN_ERR_INVALID_DB_NAME;
if (m_hFile == INVALID_HANDLE_VALUE)
return GEN_ERR_INVALID_DB;
CPalmRecord newRecord;
while (ReadRecord(newRecord)) {
AddRec(newRecord);
}
return 0;
}
Reading Customer Information
The previous routine relies on a utility routine that we need to write. This routine simply reads in a tab-delimited text file and turns it into a Palm record:
bool CSalesPcMgr::ReadRecord(CPalmRecord &rec)
{
static char gBigBuffer[4096];
if (ReadString(gBigBuffer, sizeof(gBigBuffer)) != 0) = "";
if (!uniqueID)
uniqueID = "0";
if (customerID && name) {
rec.SetID(atol(uniqueID));
rec.SetIndex(-1);
rec.SetCategory(0);
rec.SetPrivate(*priv == 'P');
// 'N' -- new, 'M' -- modify, 'D'-- delete, 'A' -- archive
// if it's Add, it can't be modify
rec.ResetAttribs();
if (strchr(attributes, 'N'))
rec.SetNew();
else if (strchr(attributes, 'M'))
rec.SetUpdate();
if (strchr(attributes, 'D'))
rec.SetDeleted();
if (strchr(attributes, 'A'))
rec.SetArchived();
static char buf[4096];
PackedCustomer *pc = (PackedCustomer *) buf;
pc->customerID = SwapLong(atol(customerID));
char *p = (char *) pc->name;
strcpy(p, name);
p += strlen(p) + 1;
strcpy(p, address);
p += strlen(p) + 1;
strcpy(p, city);
p += strlen(p) + 1;
strcpy(p, phone);
p += strlen(p) + 1;
rec.SetRawData(p - buf, (unsigned char *) buf);
return true;
} else
return false;
}
Reading a Line at a Time
Reading the customer information requires a way to read the text file line-by-line. ReadString does that for us:
long CSalesPcMgr::ReadString(char *buffer, long size)
{
long err;
long i;
for (i = 0; i < size-1; i++) {
err = ReadInData((unsigned char *) buffer + i, 1);
if (err)
break;
if (i > 0 && buffer[i-1] == '\r' && buffer[i] == '\n') {
buffer[i-1] = '\0';
return 0;
}
}
buffer[i] = '\0';
// last line may not be null-terminated
if (err == GEN_ERR_STORAGE_EOF && i > 0)
err = 0;
return err;
}
CSalesSync
CSalesSync is a class derived from CSynchronizer that is created by the Wizard. It is responsible for instantiating CSalesPcMgr, CSalesBackupMgr, and CSalesArchiveMgr (the latter two were created by the Wizard, and required no changes from us). We modified the class because we need to copy our Orders database from the handheld, and our Products database to the handheld.
Unlike the basemon case, we don't have do perform a bunch of copying tricks to handle a simple override. Everything works as you would want it to work.
Class Definition
Here's our declaration of CSalesSync:
class CSalesSync : public CSynchronizer
{
public:
CSalesSync(CSyncProperties& rProps, DWORD dwDatabaseFlags);
virtual ~CSalesSync();
virtual long Perform(void);
protected:
virtual CPDbBaseMgr *CreateArchiveManager(TCHAR *pFilename);
virtual long CreatePCManager(void);
virtual long CreateBackupManager(void);
};
Modifying Perform To Add Uploading and Downloading Products and Orders
As we found in the basemon case, there's a fairly large routine that opens the conduit, does the appropriate kind of syncing, and closes the conduit. We need to insert our code to copy the Products database and Orders database in there. We've copied that routine and inserted our code (our added code is bold).
long CSalesSync::Perform(void)
{
long retval = 0;
long retval2 = 0;
if (m_rSyncProperties.m_SyncType > eProfileInstall)
return GEN_ERR_BAD_SYNC_TYPE;
if (m_rSyncProperties.m_SyncType == eDoNothing) {
return 0;
}
// Obtain System Information
m_SystemInfo.m_ProductIdText = (BYTE*) new char [MAX_PROD_ID_TEXT];
if (!m_SystemInfo.m_ProductIdText)
return GEN_ERR_LOW_MEMORY;
m_SystemInfo.m_AllocedLen = (BYTE) MAX_PROD_ID_TEXT;
retval = SyncReadSystemInfo(m_SystemInfo);
if (retval)
return retval;
retval = RegisterConduit();
if (retval)
return retval;
for (int iCount=0; iCount < m_TotRemoteDBs && !retval; iCount++) {
retval = GetRemoteDBInfo(iCount);
if (retval) {
retval = 0;
break;
}
switch (m_rSyncProperties.m_SyncType) {
case eFast:
retval = PerformFastSync();
if ((retval) && (retval == GEN_ERR_CHANGE_SYNC_MODE)){
if (GetSyncMode() == eHHtoPC)
retval = CopyHHtoPC();
else if (GetSyncMode() == ePCtoHH)
retval = CopyPCtoHH();
}
break;
case eSlow:
retval = PerformSlowSync();
if ((retval) && (retval == GEN_ERR_CHANGE_SYNC_MODE)){
if (GetSyncMode() == eHHtoPC)
retval = CopyHHtoPC();
else if (GetSyncMode() == ePCtoHH)
retval = CopyPCtoHH();
}
break;
case eHHtoPC:
case eBackup:
retval = CopyHHtoPC();
break;
case eInstall:
case ePCtoHH:
case eProfileInstall:
retval = CopyPCtoHH();
break;
case eDoNothing:
break;
default:
retval = GEN_ERR_SYNC_TYPE_NOT_SUPPORTED;
break;
}
DeleteHHManager();
DeletePCManager();
DeleteBackupManager();
CloseArchives();
}
//
// Unregister the conduit
retval2 = UnregisterConduit((BOOL)(retval != 0));
if (!retval)
return retval2;
return retval;
}
Creating the Conduit
The Wizard generates an OpenConduit entry point in our DLL that creates our CSalesSync and then calls it's Perform function to do the work of synchronization. Since our application doesn't use an app info block, we modify the generated code to pass 0 as the second parameter to the constructor of CSalesSync, so that it won't try to synchronize that app info block.
ExportFunc long OpenConduit(PROGRESSFN pFn, CSyncProperties& rProps)
{
long retval = -1;
if (pFn)
{
CSalesSync* pGeneric;
pGeneric = new CSalesSync(rProps, 0);
if (pGeneric){
retval = pGeneric->Perform();
delete pGeneric;
}
}
return(retval);
}
At this point, we can test the code. It works just as the basemon version did, so we will use the same tests.
As you can see, Generic Conduit makes the task of supporting two-way mirror image syncing much easier. It is simpler to derive classes, since there are no real problems with functions that should be virtual that are not. In either case, we hope that it is clearer how to add support for two-way syncing after this description.
Back to: Palm Programming: The Developer's Guide
© 2001, O'Reilly & Associates, Inc.
webmaster@oreilly.com
|
http://oreilly.com/catalog/palmprog/chapter/ch13.html
|
crawl-002
|
refinedweb
| 11,563
| 52.8
|
Abstract base class for capturing sound data. More...
#include <SoundRecorder.hpp>
Abstract base class for capturing sound data.
sf::SoundBuffer provides a simple interface to access the audio recording capabilities of the computer (the microphone).
As an abstract base class, it only cares about capturing sound samples, the task of making something useful with them is left to the derived class. Note that SFML provides a built-in specialization for saving the captured data to a sound buffer (see sf::SoundBufferRecorder).
A derived class has only one virtual function to override:
Moreover, two additional virtual functions can be overridden as well if necessary:
A derived class can also control the frequency of the onProcessSamples calls, with the setProcessingInterval protected function. The default interval is chosen so that recording thread doesn't consume too much CPU, but it can be changed to a smaller value if you need to process the recorded data in real time, for example.
The audio capture feature may not be supported or activated on every platform, thus it is recommended to check its availability with the isAvailable() function. If it returns false, then any attempt to use an audio recorder will fail.
If you have multiple sound input devices connected to your computer (for example: microphone, external soundcard, webcam mic, ...) you can get a list of all available devices through the getAvailableDevices() function. You can then select a device by calling setDevice() with the appropriate device. Otherwise the default capturing device will be used.
It is important to note that the audio capture happens in a separate thread, so that it doesn't block the rest of the program. In particular, the onProcessSamples virtual function (but not onStart and not onStop) will be called from this separate thread. It is important to keep this in mind, because you may have to take care of synchronization issues if you share data between threads.
Usage example:
Definition at line 45 of file SoundRecorder.hpp.
destructor
Default constructor.
This constructor is only meant to be called by derived classes.
Get a list of the names of all available audio capture devices.
This function returns a vector of strings, containing the names of all available audio capture devices..
This virtual function is called every time a new chunk of recorded data is available. The derived class can then do whatever it wants with it (storing it, playing it, sending it over the network, etc.).
Implemented in sf::SoundBufferRecorder.
Start capturing audio data.
This virtual function may be overridden by a derived class if something has to be done every time a new capture starts. If not, this function can be ignored; the default implementation does nothing.
Reimplemented in sf::SoundBufferRecorder.
Stop capturing audio data.
This virtual function may be overridden by a derived class if something has to be done every time the capture ends. If not, this function can be ignored; the default implementation does nothing.
Reimplemented in sf::SoundBuffer().
|
https://www.sfml-dev.org/documentation/2.3/classsf_1_1SoundRecorder.php
|
CC-MAIN-2018-05
|
refinedweb
| 491
| 56.25
|
Quote from: Coding Badly on Jun 16, 2013, 07:03 amThat actually uses SoftwareSerial for the relay. Would you prefer one of the other USARTs like Serial1?Yes, I noticed that. But isn't why you are using SoftwareSerial because you can direct it to the MISO pin?
That actually uses SoftwareSerial for the relay. Would you prefer one of the other USARTs like Serial1?
If you use the USART then you would have to wire additional connections from the target?
SoftwareSerial seems to be working okay, what would be the benefits of using a USART? More speed options? Better accuracy?
You're not using a capacitor to disable auto-reset?
Out of habit, I exit Monitor when I finish but doing that should not actually be necessary. There is code in TinyISP that automatically switches from Monitor to Programmer.
Did you remember to Burn Bootloader so the target is actually running at the correct speed?
Yes, I didn't think I needed to disable auto-reset for Serial Monitor mode, but I guess that makes sense. (After all I did not have to disable auto-reset for ISP programming.)
Okay, I didn't know about that, Great feature!
So when I exit Monitor mode, it keeps on running after I close Serial Monitor, until I try an upload using programmer or burnbootloader?
0x11 is fault_timeout_knock which occurs when the pin-change interrupt fires, a knock is detected, but the knock duration is (way) too long. Loose wire, target at the wrong speed (running at 1 MHz; sketch built for 8 MHz), or the programmer is running significantly faster than 16 MHz.0x04 is fault_no_knock which occurs when the pin-change interrupt fires and no knock is detected. Loose wire, target at the wrong speed (running at 8 MHz; sketch built for 1 MHz), or side-effect of a previous failure.0x13 is fault_timeout_sample_0 which occurs when the first timing mark is not detected. Loose wire, target running at the wrong speed, or side-effect of a previous failure.
Wow. You are actually able to program a target with a 10 second bootloader time-out?Your experience will be greatly improved by disabling auto-reset!
as far as wiring goes, we are just using MISO/MOSI/SCK/RESET with a USB cable from mega to pc as you would for ISP'ing
#include <TinyDebugKnockBang.h>int led = 3;void setup( void ){ Debug.begin( 250000 ); pinMode(led, OUTPUT);}void loop( void ){ digitalWrite(led, HIGH); delay(1000); digitalWrite(led, LOW); delay(1000); Debug.println( F( "Caitlin! " ) );// delay( 1000 );}
Anything extra connected to MISO? LED? SPI device?
however if i'm doing anything useful in my sketch (some counting, analogWrite, random, delay....) knockbang seems to sit there doing nothing, gets stuck at monitor startin.....
The first version of TinyISP was made for the m328 processor when Knock-Bang did not exist. Serial was the only way to send debugging information. At that point it made sense to use SoftwareSerial with MISO as the RX pin.The new idea is to use Knock-Bang for debugging and Serial for data. As I port TinyISP to other processors I have always used a USART if I could.
This could be avoided also if the Mega2560 bootloader did something similar to Optiboot, where it will skip out of bootloader if the wrong baud rate is detected?
That sounds interesting. So Knock-Bang will continue to use MISO for debugging and you would also relay Serial via the USART?
They both show up on Serial Monitor?
Are you also implying two way with Serial?
I have an Uno with the framing-error-quick-escape feature. I had problems getting it to work as a programmer. The capacitor makes a significant positive difference. It is worth the 10 cent investment.
|
http://forum.arduino.cc/index.php?topic=123388.150
|
CC-MAIN-2015-27
|
refinedweb
| 632
| 58.48
|
I... blog post, Zach Smith shows you how to use this functionality to send e-mail alerts when errors are written to the event log.
Accessing the event log
The first step to this solution is simply having access to the event log. Event logs are named entities and are accessed based on the name. For instance, your computer probably already has event logs named Application, System, and Security. These are the default logs that are included with Windows; however, custom logs can also be created. We will use the Application log as an example.The .NET Framework contains an object called EventLog in the System.Diagnostics namespace. This object is responsible for our communication to / from a given event log. For example, to instantiate an EventLog object that represents the Application event log, we would simply write the code in Listing A.
Listing A
EventLog log = new EventLog("Application");At this point, we will have access to read the event log via the Entries property on the EventLog object. To write to the event log, you must set the Source property of the EventLog object. For instance, if your source is the Order application, you would write the code in Listing B.
Listing B
EventLog log = new EventLog("Application");
log.Source = "Order Application";log.WriteEntry("My event log entry");
That code simply writes an entry to the Application log with the specified source and message text. It is important to set the source. If you don't, you will get an exception stating that no source has been specified. Since this article is specifically about sending e-mails when errors are sent to the event log I won't spend anymore time on the basics of EventLogs — if you need more information, visit the MSDN Library.
Sending e-mails when errors are written
The event log is great, but the information contained in it is useless unless you manually go to the log and look at the entries. It would be nice if you were able to get some kind of notification when entries are written. That's where the EventLog.EntryWritten event comes into play. By subscribing to this event, we can trigger certain actions to occur based on the event type and other variables.We will use the EntryWritten event to trigger an e-mail to be sent when errors are written to the event log. To do this, we must first subscribe to the EntryWritten event as shown in Listing C.
Listing C
log.EntryWritten += new EntryWrittenEventHandler(log_EntryWritten);This subscribes to the event, but the event will never be raised unless we set EnableRaisingEvents to true on the EventLog object. (Listing D)
Listing D
log.EnableRaisingEvents = true;Now that the event is subscribed to and enabled, we need to write the event handler. (Listing E) This is where we will examine the entry to determine whether or not it is an error and decide what to do with it.
Listing E
void log_EntryWritten(object sender, EntryWrittenEventArgs e)
{
//Get a reference to the entry.
EventLogEntry entry = e.Entry;
//If this entry is an error, send an e-mail.
if (entry.EntryType == EventLogEntryType.Error)
{
//Setup the mail message.
MailMessage mail = new MailMessage(this.txtE-mailAddress.Text,
this.txtE-mailAddress.Text);
//Set the body/subject of the MailMessage
mail.Body = entry.Message;
mail.Subject = "EventLog Error From: " + entry.Source;
//Setup the SMTP client.
SmtpClient mailClient = new SmtpClient(this.txtE-mailServer.Text);
//Send the mail using the SMTP client.
mailClient.Send(mail);
}}
The code above examines the EventLogEntry.EntryType property and, if this EntryType is Error, we continue on and send an e-mail using the MailMessage and SmtpClient classes found in the System.Mail namespace. As you can see, this is all very straightforward code, just simple .NET Framework functionality.
The EntryWritten event can be used for other tasks — for example, you could catch each entry to the event log and send it to a message queue for helpdesk processing. It would even be possible to collect entries from many event logs, determine the severity of the entry, and forward those entries, based on severity, to other event logs. This would effectively be an event log router.
A couple points to keep in mind about event logsSecurity: Different logs can have different security levels. If you plan to run code such as this from a Windows service, you must ensure that the user the service is running under has access to the event log you are monitoring. Duplicate errors: There could be instances where the same application is throwing the same error to the event log on a continuing basis. If you plan to use this code in production, you will need to come up with logic to prevent your e-mail address from getting flooded with alerts. You could accomplish this by sending an alert only if the specified source (EventLogEntry.Source property) hasn't had an error for a specified period of time.
With the .NET Framework, Microsoft has given developers easy access to a wealth of possibilities in regard to custom event log handling — all that's left is for you to start exploiting this potential.
|
https://www.techrepublic.com/blog/how-do-i/how-do-i-send-e-mail-alerts-when-errors-are-written-to-the-event-log/
|
CC-MAIN-2018-51
|
refinedweb
| 860
| 64.81
|
Contents
We concentrated in Chapter 1 on computational processes and on the role of functions in program design. We saw how to use primitive data (numbers) and primitive operations (arithmetic operations), how to form compound functions through composition and control, and how to create functional abstractions by giving names to processes. We also saw that higher-order functions enhance the power of our language by enabling us to manipulate, and thereby to reason, in terms of general methods of computation. This is much of the essence of programming.
This chapter focuses on data.. Due to the explosive growth of the Internet, a vast amount of structured information about the world is freely available to all of us online.
In the beginning of this text, we distinguished between functions and data: functions performed operations and data were operated upon. When we included function values among our data, we acknowledged that data too can have behavior. Functions could be operated upon like data, but could also be called to perform computation.
In this text, as letters and numerals.(2012, 9, 10)
While today was constructed from primitive numbers, it behaves like a date. For instance, subtracting it from another date will give a time difference, which we can display as a line of text by calling str.
>>> str(date(2012, 11, 30) - today) '81.
>>> today.year 2012 today takes a single argument that specifies how to display a date (e.g., %A means that the day of the week should be spelled out in full).
>>> today.strftime('%A, %B %d') 'Monday, September 10'
Computing the return value of strftime requires two inputs: the string that describes the format of the output and the date information bundled into today. Date-specific logic is applied within this method to yield this result. We never stated that the 10th of September, 2012,.
Every object in Python has a type. The type function allows us to inspect the type of an object.
>>> type(today) <class 'datetime.date'>
So far, the types of objects we have used extensively are relatively few: numbers, functions, Booleans, and now dates. We also briefly encountered sets and strings in Chapter 1, text, the online book Dive Into Python 3 gives a pragmatic overview of all Python's native data types and how to use them effectively, including numerous usage examples and practical tips.
As we consider the wide set of things in the world that we would like to represent in our programs, we find that most of them have compound structure. A date has a year, a month, and a day; is a methodology that enables us to isolate how a compound data object typical of the typical way most Python programmers would implement these ideas in the language. What we write is instructive, however, because it demonstrates how these abstractions can be constructed! Remember that computer science isn't just about learning to use programming languages, but also understanding.
We know from using functional abstractions that we can start programming productively before we have an implementation of some parts of our program. the following three functions:
We are using here a powerful strategy of synthesis: wishful thinking. We haven't yet said how a rational number is represented, or how the functions numer, denom, and rational should be implemented. Even so, if we did have these three functions, we could then add, multiply, and test equality of rational numbers by calling them:
>>> eq_rationals, expressed using rational(n, d): return (n, d)
>>> def numer(x): return getitem(x, 0)
>>> def denom(x): return getitem(x, 1)
A function for printing rational numbers completes our implementation of this abstract data type.
>>> def rational_to_string(x): """Return a string 'n/d' for numerator n and denominator d.""" return '{0}/{1}'.format(numer(x), denom(x))
Together with the arithmetic operations we defined earlier, we can manipulate rational numbers with the functions we have defined.
>>> half = rational(1, 2) >>> rational_to_string(half) '1/2' >>> third = rational(1, 3) >>> rational_to_string(mul_rationals(half, third)) '1/6' >>> rational_to_string(add_rationals(third, third)) '6/9'
As the final example shows, our rational number implementation does not reduce rational numbers to lowest terms. We can remedy this by changing
>>> rational_to_string(add_rationals(third, third)) '2/3'
as desired. This modification was accomplished by changing the constructor without changing any of the functions that implement the actual arithmetic operations.
Further reading. The rational_to_string_rationals, mul_rationals, and eq_rationals. These, in turn, are implemented solely in terms of the constructor and selectors rational,: rational, pair and getitem_pair that fulfill this description just as well as a tuple.
>>> def pair(x, y): """Return a function that behaves like a two-element tuple.""" def dispatch(m): if m == 0: return x elif m == 1: return y return dispatch
>>> def getitem_pair(p, i): """Return the element at index i of pair p.""" return p(i)
With this implementation, we can create and manipulate pairs.
>>> p = pair(20, 12) >>> getitem_pair(p, 0) 20 >>> getitem_pair(p, 1) 12
This use of functions corresponds to nothing like our intuitive notion of what data should be. Nevertheless, these functions suffice to represent compound data in our programs.
The subtle point to notice is that the value returned by university in the world, or every student in every university. introduce built-in Python types that implement the sequence abstraction. We then develop our own abstract data type that can implement the same abstraction.).
Multiple assignment and return values. In Chapter 1, we saw that Python allows multiple names to be assigned in a single statement.
>>> from math import pi >>> radius = 10 >>> area, circumference = pi * radius * radius, 2 * pi * radius >>> area 314.1592653589793 >>> circumference 62.83185307179586
We can also return multiple values from a function.
>>> def divide_exact(n, d): return n // d, n % d >>> quotient, remainder = divide_exact(10, 3) >>> quotient 3 >>> remainder 1
Python actually uses tuples to represent multiple values separated by commas. This is called tuple packing.
>>> digits = 1, 8, 2, 8 >>> digits (1, 8, 2, 8) >>> divide_exact(10, 3) (3, 1)
Using a tuple to assign to multiple names is called, as one might expect, tuple unpacking. The names may or may not be enclosed by parentheses.
>>> d0, d1, d2, d3 = digits >>> d2 2 >>> (quotient, remainder) = divide_exact(10, 3) >>> quotient 3 >>> remainder 1
Multiple assignment is just the combination of tuple packing and unpacking.
Arbitrary argument lists. Tuples can be used to define a function that takes in an arbitrary number of arguments, such as the built-in print function. We precede a parameter name with a * to indicate that an arbitrary number of arguments can be passed in for that parameter. Python automatically packs those arguments into a tuple and binds the parameter name to that tuple..
>>> def add_all(*args): """Compute the sum of all arguments.""" total, index = 0, 0 while index < len(args): total = total + args[index] index = index + 1 return total
>>> add_all(1, 3, 2) 6
In addition, we can use the * operator to unpack a tuple to pass its elements as separate arguments to a function call.
>>> pow(*(2, 3)) 8
As can be seen here, tuples are used to provide many of the features that we have been using in Python.
Mapping is itself an instance of a general pattern of computation: iterating over all elements in a sequence. To map a function over a sequence, we do not just select a particular element, but each element in turn. This pattern is so common that Python has an additional control statement to process sequential data: the for statement.
Consider the problem of counting how many times a value appears in a sequence. We can implement a function to compute this count using a while loop.
>>> def count(s, value): """Count the number of occurrences of value in sequence s.""" total, index = 0, 0 while index < len(s): if s[index] == value: total = total + 1 index = index + 1 return total
>>> count(digits, 8) 2
The Python for statement can simplify this function body by iterating over the element values directly, without introducing the name index at all. For example (pun intended), we can write:
>>> def count(s, value): """Count the number of occurrences of value in sequence s.""" total = 0 for elem in s: if elem == value: total = total + 1 return total
>>> count(digits, 8) 2
A for statement consists of a single clause with the form:
for <name> in <expression>: <suite>
A for statement is executed by the following procedure:
Step 1 refers to an iterable value. Sequences are iterable, and their elements are considered in their sequential order. Python does include other iterable types, but we will focus on sequences for now; the general definition of the term "iterable" appears in the section on iterators in Chapter 4.
An important consequence of this evaluation procedure is that <name> will be bound to the last element of the sequence after the for statement is executed. The for loop introduces yet another way in which the pairs that have the same first and second element.
>>> same_count = 0
The following for statement with two names in its header will bind each name x and y to the first and second elements in each pair, respectively.
>>> for x, y in pairs: if x == y: same_count = same_count + 1
>>> same_count 2
This pattern of binding multiple names to multiple values in a fixed-length sequence is called sequence unpacking; it is the same pattern that we see in assignment statements that bind multiple names to multiple values.
Ranges. A range is another built-in type of sequence in Python, which represents a range of integers. Ranges are created and ranges. Both satisfy the conditions with which we began this section: length and element selection. Python includes two more behaviors of sequence types that extend the sequence abstraction.
Membership. A value can be tested for membership in a sequence. Python has two operators in and not in that evaluate to True or False depending on whether an element appears in a sequence.
>>> digits (1, 8, 2, 8) >>> 2 in digits True >>> 1828 not in digits True span of the original sequence, designated by a pair of integers. As with the range constructor, the first integer indicates the starting index of the slice and the second indicates one beyond the ending index.
In Python, sequence slicing is expressed similarly to element selection, using square brackets. A colon separates the starting and ending indices. Any bound that is omitted is assumed to be an extreme value: 0 for the starting index, and the length of the sequence for the ending index.
>>> digits[0:2] (1, 8) >>> digits[1:] (8, 2, 8)
Enumerating these additional behaviors of the Python sequence abstraction gives us an opportunity to reflect upon what constitutes a useful data abstraction in general. The richness of an abstraction (that is, how many behaviors it includes) has consequences. For users of an abstraction, additional behaviors can be helpful. On the other hand, satisfying the requirements of a rich abstraction with a new data type can be challenging. To ensure that our implementation of recursive lists supported these additional behaviors would require some work. Another negative consequence of rich abstractions is that they take longer for users to learn.
Sequences have a rich abstraction because they are so ubiquitous in computing that learning a few complex behaviors is justified. In general, most user-defined abstractions should be kept as simple as possible.
Further reading. Slice notation admits a variety of special cases, such as negative starting values, ending values, and step sizes. A complete description appears in the subsection called slicing a list in Dive Into Python 3. In this chapter, we will only use the basic features described above..
We visualize pairs (two-element tuples) in environment diagrams using box-and-pointer notation. Pairs are depicted as boxes with two parts: the left part contains (an arrow to) the first element of the pair and the right part contains the second. Simple values such as numbers, strings, boolean values, and None appear within the box. Composite values, such as function values and other pairs, are connected by a pointer.
We can use recursion to process an arbitrary nesting of pairs. For example, let's write a function to compute the sum of all integer elements in a nesting of pairs and integers.
The sum_elems function computes the sum of integer elements in a nested pair by recursively computing the sums of its first and second elements and adding the results. The base case is when an element is an integer, in which case the sum is the integer itself. the closure property if the result of combination can itself be combined using the same method. Closure is the key to power in any means of combination because it permits us to create hierarchical structures --- structures made up of parts, which themselves are made up of parts, and so on. We will explore a range of hierarchical structures in Chapter 3. For now, we consider a particularly important structure.
We can use nested pairs to form lists of elements of arbitrary length, which will allow us to implement the sequence abstraction. The environment diagram. This structure can be constructed using the nested tuple literal above.
This nested structure corresponds to a very useful way of thinking about sequences in general. discuss later. A recursive list can be constructed from a first element and the rest of the list. The value None represents an empty recursive list.
>>> empty_rlist = None >>> def rlist(first, rest): """Construct = rlist(1, rlist(2, rlist(3,. We can also implement length and element selection using recursion.
>>> def len_rlist_recursive(s): """Return the length of a recursive list s.""" if s == empty_rlist: return 0 return 1 + len_rlist_recursive(rest(s))
>>> def getitem_rlist_recursive(s, i): """Return the element at index i of recursive list s.""" if i == 0: return first(s) return getitem_rlist_recursive(rest(s), i - 1)
>>> len_rlist_recursive(counts) 4 >>> getitem_rlist_recursive(counts, 1) 2
These recursive implementations follow the chain of pairs until the end of the list (in len_rlist_recursive) or the desired element (in getitem_rlist_recursive) is reached.
Recursive lists can be manipulated using both iteration and recursion. In Chapter 3, however, we will see more complicated examples of recursive data structures that will require recursion to manipulate easily.
Let us return to the iterative way of implementing length and element selection. The series of environment diagrams below illustrate the iterative process by which getitem_rlist finds the element 2 at index 1 in the recursive list. Below, we have defined the rlist counts using Python primitives to simplify the diagrams. This implementation choice violate the abstraction barrier for the rlist data type, but allows us to inspect the computational process more easily for this example.
First, the function getitem_rlist is called, creating a local frame.
The expression in the while header evaluates to true, which causes the assignment statement in the while suite to be executed. The function rest returns the sublist starting with 2.
Next, the local name s will be updated to refer, which will also be returned.
Text values are perhaps more fundamental to computer science than even numbers. As a case in point, Python programs are written and stored as text. The native data type for text in Python is called a string, and corresponds to the constructor str.
There are many details of how strings are represented, expressed, and manipulated in Python. Strings are another example of a rich abstraction, one which requires a substantial commitment on the part of the programmer to master. This section serves as a condensed introduction to essential string behaviors.
String literals can express arbitrary text, surrounded by either single or double quotation marks.
>>> 'I am string!' 'I am string!' >>> "I've got an apostrophe" "I've got an apostrophe" >>> '您好' '您好'
We have seen strings already in our code, as docstrings, in calls to print, and as error messages in assert statements.
Strings satisfy the two basic conditions of a sequence that we introduced at the beginning of this section: they have a length and they support element selection.
>>>>> len(city) 8 >>> city[3] 'k'
The elements of a string are themselves strings that have only a single character. A character is any single letter of the alphabet, punctuation mark, or other symbol. Unlike many other programming languages, Python does not have a separate character type; any text is a string, and strings that represent single characters have a length of 1.
Like tuples, strings can also be combined via addition and multiplication.
>>> 'Berkeley' + ', CA' 'Berkeley, CA' >>> 'Shabu ' * 2 'Shabu Shabu '
Membership. The behavior of strings diverges from other sequence types in Python. The string abstraction does not conform to the full sequence abstraction that we described for
Multiline Literals. Strings aren't limited to a single line. Triple quotes delimit string literals that span multiple lines. We have used this triple quoting extensively already for docstrings.
>>> """The Zen of Python claims, Readability counts. Read more: import this.""" 'The Zen of Python\nclaims, "Readability counts."\nRead more: import this.'
In the printed result above, the \n (pronounced "backslash en") is a single element that represents a new line. Although it appears as two characters (backslash and "n"), it is considered a single character for the purposes of length and element selection.
String Coercion. A string can be created from any object in Python by calling the str constructor function with an object value as its argument. This feature of strings is useful for constructing descriptive strings from objects of various types.
>>> str(2) + ' is an element of ' + str(digits) '2 is an element of (1, 8, 2, 8)' even members of the first n technique for creating modular programs is to introduce new kinds. Mutable objects (also called mutable values) can change throughout the execution of a program.
The list is Python's most useful and flexible sequence type. A list is similar to a tuple, but it is mutable: Method calls and assignment statements can change the contents of a list._suits = ['coin', 'string', 'myriad'] # A list literal >>> suits = chinese_suits # has also changed, because it is the same list object that was bound to suits!
>>> chinese_suits # This name co-refers with "suits" to the same uses an extended syntax for creating lists, analogous to the syntax of generator expressions.']
List comprehensions reinforce the paradigm of data processing using the conventional interface of sequences, as list is a sequence data type..
Dictionaries can appear in environment diagrams as well.
The dictionary abstraction also supports various methods of iterating of and generator expressions. Evaluating a dictionary comprehension yields its local environment.
Practical Guidance...
Implementing Dictionaries. We can implement an abstract data type that conforms to the dictionary abstraction as a list of records, each of which is a two-element list consisting of a key and the associated value.
>>> def dictionary(): "" = dictionary() >>>.
The dispatch function is a general method for implementing a message passing interface for an abstract data type. To implement message dispatch, we have thus far used a large conditional statement to look up function values using message strings.
The built-in dictionary data type provides a general method for looking up a value for a key. Instead of using dispatch functions to implement abstract data types,. promote the use of these. Objects communicate with each other, and useful results are computed as a consequence of their interaction. its class. To implement new types of data, we create new classes.
A class serves as a template for all objects whose type is that class. Every object is an instance of some particular class. The objects we have used so far all have built-in classes, but new classes can be defined similarly to how new functions can be defined. A class definition specifies the attributes and methods shared among objects of that class. We will introduce the class statement by revisiting the example of a bank account.
When introducing local state, we saw that bank accounts are naturally modeled as mutable values that have a balance. A bank account object should have a withdraw method that updates the account balance and returns the requested amount, if it is available. We would like additional behavior to complete the account abstraction: a bank account should be able to return its current balance, return the name of the account holder, and accept deposits.
An Account class allows us to create multiple instances of bank accounts. The act of creating a new object instance is known as instantiating the class. The syntax in Python for instantiating a class is identical to the syntax of calling a function. In this case, we call Account with the argument 'Jim', the account holder's name.
>>> a = Account('Jim')
An attribute of an object is a name-value pair associated with the object, which is accessible via dot notation. The attributes specific to a particular object, as opposed to all objects of a class, are called instance attributes. Each Account has its own balance and account holder name, which are examples of instance attributes. In the broader programming community, instance attributes may also be called fields, properties, or instance variables.
>>> a.holder an error message.
>>> a.withdraw(10) # The withdraw method returns the balance after withdrawal 5 >>> a.balance # The balance attribute has changed 5 >>> a.withdraw(10) 'Insufficient funds'
As illustrated above, the behavior of a method can depend upon the changing attributes of the object. Two calls to withdraw with the same argument return different results.
User-defined classes are created by class statements, which consist of a single clause. A class statement defines the class name and a base class (discussed in the section on Inheritance), then includes a suite of statements to define the attributes of the class:
class <name>(<base class>): <suite>
When a class statement is executed, a new class is created and bound to <name> in the first frame of the current environment. The suite is then executed. Any names bound within the <suite> of a class statement, through def or assignment statements, create or modify attributes of the class.
Classes are typically organized around manipulating instance attributes, which are the name-value pairs associated not with the class itself, but with each object of that class. The class specifies the instance attributes of its objects by defining a method for initializing new objects. For instance, part of initializing an object of the Account class is to assign it a starting balance of 0.
The <suite> of a class statement contains def statements that define new methods for objects of that class. The method that initializes objects has a special name in Python, __init__ (two underscores on each side of "init"), and is called the constructor for the class.
>>> class Account(object): def __init__(self, account_holder): self.balance = 0 self.holder = account_holder
The __init__ method for Account has two formal parameters. The first one, self, is bound to the newly created Account object. The second parameter, account_holder, is bound to the argument passed to the class when it is called to be instantiated.
The constructor binds the instance attribute name balance to 0. It also binds the attribute name holder to the value of the name account_holder. The formal parameter account_holder is a local name'. By convention, we use the parameter name self for the first argument of a constructor, because it is bound to the object being instantiated. This convention is adopted in virtually all Python code.
Now, we can access the object's balance and holder using dot notation.
>>> a.balance 0 >>> a.holder 'Jim'
Identity. Each new account instance has its own balance attribute, the value of which is independent of other objects of the same class.
>>> b = Account('Jack') >>> b.balance = 200 >>> [acc.balance for acc in (a, b)] [0, 200]
To enforce this separation, every object that is an instance of a user-defined class has a unique identity. Object identity is compared using the is and is not operators.
>>> a is a True >>> a is not b True
Despite being constructed from identical calls, the objects bound to a and b are not the same. As usual, binding an object to a new name using assignment does not create a new object.
>>> c = a >>> c is a True
New objects that have user-defined classes are only created when a class (such as Account) is instantiated with call expression syntax.
Methods. Object methods are also defined by a def statement in the suite of a class statement. Below, deposit and withdraw are both defined as methods on objects of the Account class.
>>> class Account(object): def __init__(self, account_holder): self.balance = 0 self.holder = account_holder def deposit(self, amount): self.balance = self.balance + amount return self.balance def withdraw(self, amount): if amount > self.balance: return 'Insufficient funds' self.balance = self.balance - amount return self.balance
While method definitions do not differ from function definitions in how they are declared, method definitions do have a different effect. The function value that is created by a def statement within a class statement is bound to the declared name, but bound locally within the class as an attribute. That value is invoked as a method using dot notation from an instance of the class.
Each method definition again includes a special first parameter self, which is bound to the object on which the method is invoked. For example, let us say that deposit is invoked on a particular Account object and passed a single argument value: the amount deposited. The object itself is bound to self, while the argument is bound to amount. All invoked methods have access to the object via the self parameter, and so they can all access and manipulate the object's state.
To invoke these methods, we again use dot notation, as illustrated below.
>>> tom_account = Account('Tom') >>> tom_account.deposit(100) 100 >>> tom_account.withdraw(90) 10 >>> tom_account.withdraw(90) 'Insufficient funds' >>> tom_account.holder 'Tom'
When a method is invoked via dot notation, the object itself (bound to tom_account, in this case) plays a dual role. First, it determines what the name withdraw means; withdraw is not a name in the environment, but instead a name that is local to the Account class. Second, it is bound to the first parameter self when the withdraw method is invoked. The details of the procedure for evaluating dot notation follow in the next section.
Methods, which are defined in classes, and instance attributes, which are typically assigned in constructors, are the fundamental elements of object-oriented programming. These two concepts replicate much of the behavior of a dispatch dictionary in a message passing implementation of a data value. Objects take messages using dot notation, but instead of those messages being arbitrary string-valued keys, they are names local to a class. Objects also have named local state values (the instance attributes), but that state can be accessed and manipulated using dot notation, without having to employ nonlocal statements in the implementation.
The central idea in message passing was that data values should have behavior by responding to messages that are relevant to the abstract type they represent. Dot notation is a syntactic feature of Python that formalizes the message passing metaphor. The advantage of using a language with a built-in object system is that message passing can interact seamlessly with other language features, such as assignment statements. We do not require different messages to "get" or "set" the value associated with a local attribute name; the language syntax allows us to use the message name directly.
Dot expressions. The code fragment tom_account.deposit is called a dot expression. A dot expression consists of an expression, a dot, and a name:
<expression> . <name>
The <expression> can be any valid Python expression, but the <name> must be a simple name (not an expression that evaluates to a name). A dot expression evaluates to the value of the attribute with the given <name>, for the object that is the value of the <expression>.
The built-in function getattr also returns an attribute for an object by name. It is the function equivalent of dot notation. Using getattr, we can look up an attribute using a string, just as we did with a dispatch dictionary.
>>> getattr and functions. When a method is invoked on an object, that object is implicitly passed as the first argument to the method. That is, the object that is the value of the <expression> to the left of the dot is passed automatically as the first argument to the method named on the right side of the dot expression. As a result, the object is bound to the parameter self.
To achieve automatic self binding, Python distinguishes between functions, which we have been creating since the beginning of the text, and bound methods, which couple together a function and the object on which that method will be invoked. A bound method value is already associated with its first argument, the instance on which it was invoked, which will be named self when the method is called.
We can see the difference in the interactive interpreter by calling type on the returned values of dot expressions. As an attribute of a class, a method is just a function, but as an attribute of an instance, it is a bound method:
>>> type(Account.deposit) <class 'function'> >>> type(tom_account.deposit) <class 'method'>
These two results differ only in the fact that the first is a standard two-argument function with parameters self and amount. The second is a one-argument method, where the name self will be bound to the object named tom_account automatically when the method is called, while the parameter amount will be bound to the argument passed to the method. Both of these values, whether function values or bound method values, are associated with the same deposit function body.
We can call deposit in two ways: as a function and as a bound method. In the former case, we must supply an argument for the self parameter explicitly. In the latter case, the self parameter is bound automatically.
>>> Account.deposit(tom_account, 1001) # The deposit function requires 2 arguments 1011 >>> tom_account.deposit(1000) # The deposit method takes 1 argument 2011
The function getattr behaves exactly like dot notation: if its first argument is an object but the name is a method defined in the class, then getattr returns a bound method value. On the other hand, if the first argument is a class, then getattr returns the attribute value directly, which is a plain function.
Practical guidance: naming conventions. Class names are conventionally written using the CapWords convention (also called CamelCase because the capital letters in the middle of a name are like humps). Method names follow the standard convention of naming functions using lowercased words separated by underscores.
In some cases, there are instance variables and methods that are related to the maintenance and consistency of an object that we don't want users of the object to see or use. They are not part of the abstraction defined by a class, but instead part of the implementation. Python's convention dictates that if an attribute name starts with an underscore, it should only be accessed within methods of the class itself, rather than by users of the class.
Some attribute values are shared across all objects of a given class. Such attributes are associated with the class itself, rather than any individual instance of the class. For instance, let us say that a bank pays interest on the balance of accounts at a fixed interest rate. That interest rate may change, but it is a single value shared across all accounts.
Class attributes are created by assignment statements in the suite of a class statement, outside of any method definition. In the broader developer community, class attributes may also be called class variables or static variables. The following class statement creates a class attribute for Account with the name interest.
>>> class Account_account.interest 0.04
Attribute names. We have introduced enough complexity into our object system that we have to specify how names are resolved to particular attributes. After all, we could easily have a class attribute and an instance attribute with the same name.
As we have seen, a dot.
Assignment. All assignment statements that contain a dot expression on their left-hand side affect attributes for the object of that dot expression. If the object is an instance, then assignment sets an instance attribute. If the object is a class, then assignment sets a class attribute. As a consequence of this rule, assignment to an attribute of an object cannot affect the attributes of its class. The examples below illustrate this distinction.
If we assign to the named attribute interest of an account instance, we create a new instance attribute that has the same name as the existing class attribute.
>>> types are related. In particular, we find that similar classes differ in their amount of specialization. Two classes may have similar attributes, but one represents a special case of the other.
For example, we may want to implement a checking account, which is different from a standard account. A checking account charges an extra $1 for each withdrawal and has a lower interest rate. Here, we demonstrate the desired behavior.
>>> ch = CheckingAccount('Tom') >>> ch.interest # Lower interest rate for checking accounts 0.01 >>> ch.deposit(20) # Deposits are the same 20 >>> ch.withdraw(5) # withdrawals decrease balance by an extra charge 14
A CheckingAccount is a specialization of an Account. In OOP terminology, the generic account will serve as the base class of CheckingAccount, while CheckingAccount will be a subclass of Account. (The terms parent class and superclass are also used for the base class, while child class is also used for the subclass.)
A subclass inherits the attributes of its base class, but may override certain attributes, including certain methods. With inheritance, we only specify what is different between the subclass and the base class. Anything that we leave unspecified in the subclass is automatically assumed to behave just as it would for the base class.
Inheritance also has a role in our object metaphor, in addition to being a useful organizational feature. Inheritance is meant to represent is-a relationships between classes, which contrast with has-a relationships. A checking account is-a specific type of account, so having a CheckingAccount inherit from Account is an appropriate use of inheritance. On the other hand, a bank has-a list of bank accounts that it manages, so neither should inherit from the other. Instead, a list of account objects would be naturally expressed as an instance attribute of a bank object.
We specify inheritance by putting the base class in parentheses after the class name. First, we give a full implementation of the Account class, which includes docstrings for the class and its methods.
>>> class Account(object): """A bank account that has a non-negative balance.""" interest = 0.02 def __init__(self, account_holder): self.balance = 0 self.holder = account_holder def deposit(self, amount): """Increase the account balance by amount and return the new balance.""" self.balance = self.balance + amount return self.balance def withdraw(self, amount): """Decrease the account balance by amount and return the new balance.""" if amount > self.balance: return 'Insufficient funds' self.balance = self.balance - amount return self.balance
A full implementation of CheckingAccount appears below.
>>> class CheckingAccount(Account): """A bank account that charges for withdrawals.""" withdraw_charge = 1 interest = 0.01 def withdraw(self, amount): return Account.withdraw(self, amount + self.withdraw_charge)
Here, we introduce a class attribute withdraw_charge that is specific to the CheckingAccount class. We assign a lower value to the interest attribute. We also define a new withdraw method to override the behavior defined in the Account class. With no further statements in the class suite, all other behavior is inherited from the base class Account.
>>> checking = CheckingAccount('Sam') >>> checking.deposit(10) 10 >>> checking.withdraw(5) 4 >>> checking.interest 0.01
The expression checking.deposit evaluates to a bound method for making deposits, which was defined in the Account class. When Python resolves a name in a dot expression that is not an attribute of the instance, it looks up the name in the class. In fact, the act of "looking up" a name in a class tries to find that name in every base class in the inheritance chain for the original object's class. We can define this procedure recursively. To look up a name in a class.
In the case of deposit, Python would have looked for the name first on the instance, and then in the CheckingAccount class. Finally, it would look in the Account class, where deposit is defined. According to our evaluation rule for dot expressions, since deposit is a function looked up in the class for the checking instance, the dot expression evaluates to a bound method value. That method is invoked with the argument 10, which calls the deposit method with self bound to the checking object and amount bound to 10.
The class of an object stays constant throughout. Even though the deposit method was found in the Account class, deposit is called with self bound to an instance of CheckingAccount, not of Account.
Calling ancestors. Attributes that have been overridden are still accessible via class objects. For instance, we implemented the withdraw method of CheckingAccount by calling the withdraw method of Account with an argument that included the withdraw_charge.
Notice that we called self.withdraw_charge rather than the equivalent CheckingAccount.withdraw_charge. The benefit of the former over the latter is that a class that inherits from CheckingAccount might override the withdrawal charge. If that is the case, we would like our implementation of withdraw to find that new value instead of the old one.
Object Abstractions. It is extremely common in object-oriented programs that different types of objects will share the same attribute names. An object abstraction is a collection of attributes and conditions on those attributes. For example, all accounts must have deposit and withdraw methods that take numerical arguments, as well as a balance attribute. The classes Account and CheckingAccount both implement this abstraction. Inheritance specifically promotes name sharing in this way.
The parts of your program that use objects (rather than implementing them) are most robust to future changes if they do not make assumptions about object types, but instead only about their attribute names. That is, they use the object abstraction, rather than assuming anything about its implementation. For example, let us say that we run a lottery, and we wish to deposit $5 into each of a list of accounts. The following implementation does not assume anything about the types of those accounts, and therefore works equally well with any type of object that has a deposit method:
>>> def deposit_all(winners, amount=5): for account in winners: account.deposit(amount)
The function deposit_all above assumes only that each account satisfies the account object abstraction, and so it will work with any other account classes that also implement this abstraction. Assuming a particular class of account would violate the abstraction barrier of the account object abstraction. For example, the following implementation will not necessarily work with new kinds of accounts:
>>> def deposit_all(winners, amount=5): for account in winners: Account.deposit(account, amount)
Python supports the concept of a subclass inheriting attributes from multiple base classes, a language feature called multiple inheritance.
Suppose that we have a SavingsAccount that inherits from Account, but charges customers a small fee every time they make a deposit.
>>> class SavingsAccount(Account): deposit_charge = 2 def deposit(self, amount): return Account.deposit(self, amount - self.deposit_charge)
Then, a clever executive conceives of an AsSeenOnTVAccount account with the best features of both CheckingAccount and SavingsAccount: withdrawal fees, deposit fees, and a low interest rate. It's both a checking and a savings account in one! "If we build it," the executive reasons, "someone will sign up and pay all those fees. We'll even give them a dollar."
>>> class AsSeenOnTVAccount(CheckingAccount, SavingsAccount): def __init__(self, account_holder): self.holder = account_holder self.balance = 1 # A free dollar!
In fact, this implementation is complete. Both withdrawal and deposits will generate fees, using the function definitions in CheckingAccount and SavingsAccount respectively.
>>> such_a_deal = AsSeenOnTVAccount("John") >>> such_a_deal.balance 1 >>> such_a_deal.deposit(20) # $2 fee from SavingsAccount.deposit 19 >>> such_a_deal.withdraw(5) # $1 fee from CheckingAccount.withdraw 13
Non-ambiguous references are resolved correctly as expected:
>>> such_a_deal.deposit_charge 2 >>> such_a_deal.withdraw_charge 1
But what about when the reference is ambiguous, such as the reference to the withdraw method that is defined in both Account and CheckingAccount? The figure below depicts an inheritance graph for the AsSeenOnTVAccount class. Each arrow points from a subclass to a base class.
For a simple "diamond" shape like this, Python resolves names from left to right, then upwards. In this example, Python checks for an attribute name in the following classes, in order, until an attribute with that name is found:
AsSeenOnTVAccount, CheckingAccount, SavingsAccount, Account, object
There is no correct solution to the inheritance ordering problem, as there are cases in which we might prefer to give precedence to certain inherited classes over others. However, any programming language that supports multiple inheritance must select some ordering in a consistent way, so that users of the language can predict the behavior of their programs.
Further reading. Python resolves this name using a recursive algorithm called the C3 Method Resolution Ordering. The method resolution order of any class can be queried using the mro method on all classes.
>>> [c.__name__ for c in AsSeenOnTVAccount.mro()] ['AsSeenOnTVAccount', 'CheckingAccount', 'SavingsAccount', 'Account', 'object']
The precise algorithm for finding method resolution orderings is not a topic for this text, but is described by Python's primary author with a reference to the original paper.
In some languages, such as Python and Ruby, functions themselves are objects, i.e. instances of special classes. For example, functions built into the interpreter are instances of the builtin_function_or_method class:
>>> type(pow) <class 'builtin_function_or_method'>
Similarly, as demonstrated above for the deposit function and method, user-defined functions are instances of function, and user-defined methods are instances of method:
>>> type(Account.deposit) <class 'function'> >>> type(tom_account.deposit) <class 'method'>
By calling mro on the resulting types, we can see that they all inherit from object, just like the classes we defined above. (The names builtin_function_or_method, function, and method are not bound in the global namespace, so we cannot directly call mro on them.)
>>> [c.__name__ for c in type(pow).mro()] ['builtin_function_or_method', 'object'] >>> [c.__name__ for c in type(Account.deposit).mro()] ['function', 'object'] >>> [c.__name__ for c in type(tom_account.deposit).mro()] ['method', 'object']
Since functions are objects, they have attributes just like any other object. For example, all functions have a __name__ attribute:
>>> pow.__name__ 'pow'
Attributes can also be added to user-defined functions, and most existing attributes can be modified. Methods and built-in functions, however, don't allow adding or changing attributes.
>>> def my_pow(x, y): return pow(x, y)
>>> my_pow <function my_pow at 0x7f77b2558270> >>> my_pow.__name__ = "power" # change attribute >>> my_pow <function power at 0x7f77b2558270> >>> my_pow.value = 42 # new attribute >>> my_pow.value 42
(Note: As of Python 3.3, functions have a __qualname__ attribute that is used in many places instead of __name__. In the example above, it is the __qualname__ attribute that must be changed instead of __name__ to achieve the same behavior in Python 3.3.)
Function attributes allow us to store function-specific data with the function itself, without polluting the global namespace.
The Python object system is designed to make data abstraction and message passing both convenient and flexible. The specialized syntax of classes, methods, inheritance, and dot expressions all enable us to formalize the object metaphor in our programs, which improves our ability to organize large programs.
In particular, we would like our object system to promote a separation of concerns among the different aspects of the program. Each object in a program encapsulates and manages some part of the program's state, and each class statement defines the functions that implement some part of the program's overall logic. Abstraction barriers enforce the boundaries between different aspects of a large program.
Object-oriented programming is particularly well-suited to programs that model systems that have separate but interacting parts. For instance, different users interact in a social network, different characters interact in a game, and different shapes interact in a physical simulation. When representing such systems, the objects in a program often map naturally onto objects in the system being modeled, and classes represent their types and relationships.
On the other hand, classes may not provide the best mechanism for implementing certain abstractions. Functional abstractions provide a more natural metaphor for representing relationships between inputs and outputs. One should not feel compelled to fit every bit of logic in a program within a class, especially when defining independent functions for manipulating data is more natural. Functions can also enforce a separation of concerns.
Multi-paradigm languages:
>>>.
We now return to use the bank account example from the previous section. Using our implemented object system, we will create an Account class, a CheckingAccount subclass, and an instance of each.
The Account class is created through a make_account_class function, which has structure similar to a class statement in Python, but concludes with a call to make_class.
>>> def make_account_class(): """Return the Account class, which has deposit and withdraw methods."""())
The final call to locals returns a dictionary with string keys that contains the name-value bindings in the current local frame..""" interest = 0.01 withdraw_fee = 1 def withdraw(self, amount): fee = self['get']('withdraw_fee') return Account['get']('withdraw')(self, amount + fee) return make_class(locals(),.
Python stipulates that all objects should produce two different string representations: one that is human-interpretable text and one that is a Python-interpretable expression. The constructor function for strings, str, returns a human-readable string. Where possible, the repr function returns a Python expression that evaluates to an equal object. The docstring for repr explains this property:
repr(object) -> string Return the canonical string representation of the object. For most object types, eval(repr(object)) == object.
The result of calling repr on the value of an expression is what Python prints in an interactive session.
>>> 12e12 12000000000000.0 >>> print(repr(12e12)) 12000000000000.0
In cases where no representation exists that evaluates to the original value, Python. However, in large programs, it may not always make sense to speak of "the underlying representation" for a data type in a program. the rectangular form is more appropriate and sometimes the polar form is more appropriate. Indeed, it is perfectly plausible to imagine a system in which complex numbers are represented in both ways, and in which the functions for manipulating complex numbers work with either representation.
More importantly, large software systems are often designed by many people working over extended periods of time, subject to requirements that change over time. In such an environment, it is simply not possible for everyone to agree in advance on choices of data representation. coordinates.
When multiplying complex numbers, it is more natural to think in terms of representing a complex number in polar form, as a magnitude and an angle. The product of two complex numbers is the vector obtained by stretching one complex number by a factor.)
True and false values. We saw previously that numbers in Python have a truth value; more specifically, 0 is a false value and all other numbers are true values. In fact, all objects in Python have a truth value. By default, objects are considered to be true, but the special __bool__ method can be used to override this behavior. If an object defines the __bool__ method, then Python calls that method to determine its truth value.
As an example, suppose we want the complex number 0 + 0 * i to be false. We can define the __bool__ method for both our complex number implementations.
>>> ComplexRI.__bool__ = lambda self: self.real != 0 or self.imag != 0 >>> ComplexMA.__bool__ = lambda self: self.magnitude != 0
We can call the bool constructor to see the truth value of an object, and we can use any object in a boolean context.
>>> bool(ComplexRI(1, 2)) True >>> bool(ComplexRI(0, 0)) False >>> if not ComplexMA(0, 1): print("complex number is true") complex number is true
Sequence length.)
Here, the Adder class behaves like the make_adder higher-order function, and the add_three_obj object behaves like the add_three function. We have further blurred the line between data and functions.
Though callable objects are less efficient than higher-order functions, they allow us to take full advantage of the object system. For example, we can use inheritance to create a family of callable objects with shared functionality.
Further reading. Special methods generalize the built-in operators so that they can be used with user-defined objects. In order to provide this generality, Python follows specific protocols to apply each operator. For example, to evaluate expressions that contain the + operator, Python checks for special methods on both the left and right operands of the expression. First, Python checks for an __add__ method on the value of the left operand, then checks for an __radd__ method on the value of the right operand. If either is found, that method is invoked with the value of the other operand as its argument.s(x, y): nx, dx = x.numer, x.denom ny, dy = y.numer, y.denom return Rational(nx * dy + ny * dx, dx * dy)
>>> def mul_rationalss'}
Here, we store the tag set as an attribute of the type_tag function to avoid polluting the global namespace. (Recall that functions are objects and therefore may have attributes.)ss_and_complex = lambda r, z: mul_complex_and_rational(z, r) >>> apply.implementations = {('mul', ('com', 'com')): mul_complex, ('mul', ('com', 'rat')): mul_complex_and_rational, ('mul', ('rat', 'com')): mul_rationals_and_complex, ('mul', ('rat', 'rat')): mul_rationals}.
Coercion. In the general situation of completely unrelated operations acting on completely unrelated types, implementing explicit cross-type operations, cumbersome though it may be, is the best that one can hope for. Fortunately, we can sometimes a rational number with a complex number, we can view the rational number as a complex number whose imaginary part is zero.s, ('add', 'com'): add_complex, ('add', 'rat'): add_rationals} of types and each generic operation. What we are counting on here is the fact that the appropriate transformation between types depends only on the types themselves, not on the particular operation to be applied.
Further advantages come from extending coercion. Some more sophisticated coercion schemes do not just try to coerce one type into another, but instead may try to coerce two different types each into a third common type. Consider a rhombus and a rectangle: neither is a special case of the other, but both can be viewed as quadrilaterals. Another extension to coercion is iterative coercion, in which one data type is coerced into another via intermediate types. Consider that an integer can be converted into a real number by first converting it into a rational number, then converting that rational number into a real number. Chaining coercion in this way can reduce the total number of coercion functions that are required by a program.
Despite its advantages, coercion does have potential drawbacks. For one, coercion functions can lose information when they are applied. In our example, rational numbers are exact representations, but become approximations when they are converted to complex numbers.
Some programming languages have automatic coercion systems built in. In fact, early versions of Python had a __coerce__ special method on objects. In the end, the complexity of the built-in coercion system did not justify its use, and so it was removed. Instead, particular operators apply coercion to their arguments as needed..
|
http://inst.eecs.berkeley.edu/~cs61a/book/chapters/objects.html
|
CC-MAIN-2014-42
|
refinedweb
| 8,625
| 55.13
|
Closed Bug 513063 Opened 10 years ago Closed 10 years ago
Avoid bit twiddling on double values
Categories
(Core :: JavaScript Engine, defect)
Tracking
()
People
(Reporter: gal, Assigned: gal)
Details
(Whiteboard: fixed-in-tracemonkey)
Attachments
(2 files)
The attached patch performs DOUBLE_IS_INT and friends directly on floating point registers using compiler intrinsics to determine whether a number is nan or finite, instead of forcing it into memory and meddle with the bit representation.
Assignee: general → gal
Comment on attachment 397094 [details] [diff] [review] patch > static JSDHashNumber > HashDouble(JSDHashTable *table, const void *key) > { >- jsdouble d; >- > JS_ASSERT(IS_DOUBLE_TABLE(table)); >- d = *(jsdouble *)key; >- return JSDOUBLE_HI32(d) ^ JSDOUBLE_LO32(d); >+ return JS_HASH_INT64(key); Is JS_HASH_INT64 strict-aliasing safe? >-#ifdef HPUX >- /* >- * Negation of a zero doesn't produce a negative >- * zero on HPUX. Perform the operation by bit >- * twiddling. >- */ >- JSDOUBLE_HI32(d) ^= JSDOUBLE_HI32_SIGNBIT; >-#else > d = -d; >-#endif There are three such old HPUX ifdefs: $ grep HPUX *.cpp jsnum.cpp:#ifdef HPUX jsnum.cpp: * here on HPUX. Force a negative zero instead. jsops.cpp:#ifdef HPUX jsops.cpp: * zero on HPUX. Perform the operation by bit jsparse.cpp:#ifdef HPUX jsparse.cpp: * zero on HPUX. Perform the operation by bit Remove them all, or leave them all -- no 1/3rd measures :-P. r=me with these addressed. /be
Attachment #397094 - Flags: review?(brendan) → review+
void* should be exempt from alias analysis since its not a proper type. On top of that, HashDouble is called via a virtual dispatch. So if there is an alias problem here, its the double going into the void* pointer. Whether we convert the void* on the other side of the virtual call into a double* doesn't really matter. It would be too late (if this was a problem, which it isn't). I will remove the other HPUX stuff too. Thanks.
After some additional discussion with jimb we decided to not rely on the virtual dispatch as knowledge boundary. I will add a proper union conversion.
Whiteboard: fixed-in-tracemonkey
Win32 tinderboxes break on the patch as-landed. Attached patch fixes them.
Attachment #397134 - Flags: review?
Comment on attachment 397134 [details] [diff] [review] fix windows breakage OK, looks good, and anything to stop the fires.
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
Flags: wanted1.9.2+
|
https://bugzilla.mozilla.org/show_bug.cgi?id=513063
|
CC-MAIN-2019-43
|
refinedweb
| 375
| 59.4
|
Help:Toolforge
Note: This page is in a draft form as part of planned improvements to Toolforge developer documentation. Some information that was previously available here has been moved to the About Toolforge page. You may also find information you are looking for linked from Portal:Toolforge.
Contents
- 1 Using Toolforge and managing your files
- 2 Tool accounts
- 3 Customizing a Tool account
- 4 Setting up code review and version control
- 5 Database access
- 6 Code samples for common languages
- 7 Submitting, managing and scheduling jobs on the grid
- 8 Email
- 9 Web server
- 10 Developing on Toolforge
- 11 Redis
- 12 Elasticsearch
- 13 Dumps
- 14 CatGraph (aka Graphserv/Graphcore)
- 15 Celery
- 16 Troubleshooting
- 17 Backups
Using Toolforge and managing your files
Toolforge can be accessed in a variety of ways – from its public IP to a GUI client. Please see Help:Access for general information about accessing Cloud VPS projects.
The tools list
The Toolforge.
Updating files
After you can ssh successfully, you can transfer files via sftp and scp. Note that the transferred files will be owned by you. You will likely wish to transfer ownership to your tool account. To do this:
0.
chgrp toolaccount FILE/project/.
See #Setting up code review and version control for more details about using source control for your tool.
Putty and WinSCP
Note that instructions for accessing Toolforge with Putty and WinSCP differ from the instructions for using them with other Cloud VPS projects. Please see Help:Access to Toolforge instances with PuTTY and WinSCP for information specific to Toolforge.
Run
webservice start and then you should be able to access the initial pre-install screen of MediaWiki from your web browser as:<YOURTOOL>/MW/
and proceed as usual. See how to create new databases for your MediaWiki installations.
Tool accounts
What is a Tool account?
A Tool account is a shared Unix account. This account acts as the "user" associated with a Tool on Toolforge. Although each tool account has a user ID, they are not personal accounts (like a Cloud VPS account), rather services that consist of a user and group ID that are intended to run the actual tool or bot. Anyone who is a member of the Toolforge project can create a Tool account..
In addition to the user/group pair,
Creating a new Tool account
Members of the ‘tools’ project can create tool accounts using toolsadmin:
- Go to.
- Click the "Create new tool" link at the bottom of the "Your tools" sidebar (if you don't see this and you were recently added to the 'tools' project, try to logoff and login again)
- Follow the instructions in the tool account creation form.
- The new tool will need a unique name. The name will become part of the URL for the final webservice, so choose wisely!
Do not prefix your tool name with
tools.. The system will do so automatically where appropriate, and there is a known issue that will cause the account to be created improperly if you do.
- Note: If you have only recently been added to the 'tools' project, you may get an error about not being a member. Simply log out and back in to toolsadmin to fix this.
The tool account will be created and you will be granted access to it within a minute or two. If you were already logged in to a Toolforge bastion through SSH, you will have to log off then back in before you can access the new tool account.
Joining an existing Tool account
All tool accounts hosted in Toolforge are listed on the Tools list. If you would like to be added to an existing account, you must contact the maintainer(s) directly.
If you would like to add (or remove) maintainers to a tool account that you manage, you may do so with the 'manage maintainers' link found beneath the tool name on the Toolforge home page.
Using a Tool account
A simple way for maintainers to switch to the tool account is with
become:
maintainer@tools-login:~$ become <TOOL NAME> tools.toolname@tools-login:~$
Troubleshooting
$ become <TOOL NAME> become: no such tool '<TOOL NAME>'
- Make sure you have typed your new tool name correctly.
- It may take a few minutes for your tool's home directory and files to be created. Wait a few minutes, and try again.
$ become <TOOL NAME> You are not a member of the group tools.<TOOL NAME>. Any existing member of the tool's group can add you to that.
- An active ssh session to login.tools.wmflabs.org will not automatically be updated with new permissions when you are added as a maintainer of a tool. If you are already logged in via ssh when you create the new tool, log out and then log in again to activate your new permissions.
Deleting a Tool account
Task T170355
You can't delete a tool account yourself, though you can delete the content of your directories and make an existing web tool inaccessible by shutting down the web service (
webservice stop). If you really want a tool account to be deleted, please follow the steps described at Toolforge (Tools to be web page for your tool (it will be linked from the Tools home page automatically) the Tool environment a Toolforge bastion using your Wikimedia developer account and become your tool account:
$ ssh tools-login.wmflabs.orgforge Cloud VPS makes it pretty easy to use Git for source control and Gerrit for code review, but you also have other options.
Using Diffusion
- Go to toolsadmin
- Find your tool
- Click the create new repository button
Requesting a Gerrit/Git repository for your tool
Toolforge users may request a Gerrit/Git repository for their tools. Access to Git is managed via Wikimedia Cloud VPS and integrated with Gerrit, a code review system.
In order to use the Wikimedia Cloud VPS code review and version control, you must upload your ssh key to Gerrit and then request a`).
There's also a tutorial for setting up the tool to be automatically updated whenever the GitHub repository is pushed to.
Database access
To connect to the English Wikipedia replica, specify the alias of the hosting cluster (enwiki.analytics.db.svc.eqiad.wmflabs) and the alias of the database replica (enwiki_p) :
mysql --defaults-file=$HOME/replica.my.cnf -h enwiki.analytics.db.svc.eqiad.wmflabs enwiki_p
To connect to the Wikidata cluster:
mysql --defaults-file=$HOME/replica.my.cnf -h wikidatawiki.analytics.db.svc.eqiad.wmflabs
To connect to Commons cluster:
mysql --defaults-file=$HOME/replica.my.cnf -h commonswiki.analytics.db.svc.eqiad.wmflabs local
This sets server to "tools.db.svc.eqiad.wmflabs" and db to "". It's equivalent to typing-
mysql --defaults-file=$HOME/replica.my.cnf -h tools.db.svc.eqiad.wmflabs.eqiad.wmflabs enwiki_p > ~/query_results-enwiki; date;
C
#include <my_global.h> #include <mysql.h> ... char *host = "tools.db.svc.eqiad.wmflabs";.eqiad.wmflabs"; my $dbh = DBI->connect( "DBI:mysql:database=$database;host=$host;" . "mysql_read_default_file=" . getpwuid($<)->dir . "/replica.my.cnf", undef, undef) or die "Error: $DBI::err, $DBI::errstr";
Python
Using User:Legoktm/toolforge library.
PHP (using PDO)
<?php $ts_pw = posix_getpwuid(posix_getuid()); $ts_mycnf = parse_ini_file($ts_pw['dir'] . "/replica.my.cnf"); $db = new PDO("mysql:host=enwiki.analytics.db.svc.eqiad.wmflabs.eqiad.wmflabs', $ts_mycnf['user'], $ts_mycnf['password'], 'enwiki_p'); // YOUR REQUEST HERE ?>.eqiad.wmflabs:3306/enwiki_p"; Connection conn = DriverManager.getConnection(url, mycnf);
Submitting, managing and scheduling jobs on the a Tool Cloud VPS.
~/.forward and
~/.forward.anything need to be readable by the user
Debian-exim; to achieve that, you probably need to
chmod o+r ~/.forward*.
Mail from Tools
From the Grid.
From within a container
To send mail from within a Kubernetes container, use the
mail.tools.wmflabs.org SMTP server.
Containers running on the Toolforge Kubernetes cluster do not install and configure a local mailer service like the exim service that is installed on grid engine nodes. Tools running in Kubernetes should instead send email using an external SMTP server. The
mail.tools.wmflabs.org service name should be usable for this. This service name is used as the public MX (mail exchange) host for inbound SMTP messages to the
tools.wmflabs.org domain and points to a server that can process both inbound and outbound email for Toolforge.
Web server
Every tool can have a dedicated web server running on either the job grid or kubernetes. The default 'lighttpd' webservice type runs a lighttpd web server configured to serve static files and PHP scripts from the tool's
$HOME/public_html directory.
You can start a tool's web server with the
webservice command:
$ become my_cool_tool $ webservice start
You can also use the
webservice command to
stop,
restart, and check the
status of the webserver. Use
webservice --help to get a full list of arguments.
Developing on Toolforge
- This is a brief summary of the /Developing documentation page.
-.
The full documentation page provides tips and instructions for developing code in the Toolforge, gerrit-to-red. This protection however should not be trusted to protect any secret data. Do not store plain text secrets or decryption keys in Redis for your own protection.
Can I use memcache?
There is no memcached on Toolforge. Please use Redis instead.
Elasticsearch
- This is a brief summary of the /Elasticsearch documentation page.
Elasticsearch is a full text search system built on Apache Lucene. It can be used to index and search data stored as JSON documents. It is the technology used to power Wikimedia's CirrusSearch system.
An Elasticsearch cluster that can be used by all tools is available on
tools-elastic-0[123], on the non-standard port
80. This Elasticsearch cluster is a shared resource and all documents indexed in it can be read by anonymous users from within Toolforge. Write access needed to create new indexes, and store or update documents requires a username and password.
See full documentation at /Elasticsearch for more information.
Dumps
The 'tools' project.
Celery
It is possible to run a celery worker in a kubernetes container as a continuous job (for instance to execute long-running tasks triggered by a web frontend). The redis service can be used as a broker between the worker and the web frontend. Make sure you use your own queue name so that your tasks get sent to the right workers.
Troubleshooting
If you run into problems, please see the § Contact section. Specifically, please feel free to come into #wikimedia-cloud connect. The cloud mailing list is another good place to ask for help, especially if the people in chat are not responding.
Backups
What gets backed up?
The basic rule is: there is a lot of redundancy, but no user-accessible backups. Toolforge users should make certain that they use source control to preserve their code, and make regular backups of irreplaceable data. With luck, some files may be recoverable by Cloud Services administrators in a manual process. But this requires human intervention and will likely not rescue the file that was created five minutes ago and deleted two minutes ago. If necessary, ask on IRC or file a Phabricator task.
|
https://wikitech.wikimedia.org/wiki/Help:Move_your_bot_to_Labs
|
CC-MAIN-2019-30
|
refinedweb
| 1,852
| 56.15
|
-
The Basics of Validation
There is another concept in XML that is just as important, if not more so, than well-formedness: validation. The idea behind validation is to create a document with defined structure and rules for how the content is to be organized. Then, by checking the document against the set of rules, the document can be declared valid or an error can be generated, indicating where the document is incorrectly formatted or structured.
The document that establishes the set of rules is called a schema, with a lowercase s. The terminology here can become somewhat confusing, because a schema in the generic sense is just a set of rules that define the structure. However, with XML, there are two common types of schemas that are used to support validation: Document Type Definitions (DTDs) and XML Schemas.
Within both DTDs and XML Schemas, you can establish rules for what elements and attributes may be used in your XML documents, as well as define other resources, such as declaring any entities to be used in your documents.
After the schema (in the form of a DTD or XML Schema) is written, the schema is then linked to your XML document. In the case of a DTD, this is accomplished with a DOCTYPE declaration in the document. When the document is read by a parser that supports validation, or a validating parser, the document is checked against the rules contained in the DTD. If the document fails to comply with the rules, then an error is generated. If the document complies with the rules in the DTD, then it is valid. A similar mechanism is employed to link an XML Schema with a document, and the result of validation is the same: A document that meets the validity constraints is valid. The specifics of DTDs are discussed at length in Chapter 4 and the specifics of XML Schemas are discussed in Chapter 5.
Validating an XML document provides many benefits. Validation can provide a mechanism for enforcing data integrity. It can be a method for expediting searching or indexing. It can also help manage large documents or collaborative documents that might be broken into chunks for editing purposes.
All of these issues, and many more, make validation one of the more powerful tools of XML.
Document Type Definitions: A Glimpse
One common mechanism for validating XML documents is the Document Type Definition. An XML document is valid if it has an associated document type declaration and if the document complies with the constraints expressed in it.
The document type declaration is the statement in your XML file that points to the location of the DTD. For example:
<?xml version="1.0" ?> <!DOCTYPE document SYSTEM "example.dtd"> <document></document>
Here we have a document called document, which is linked by the document type declaration to the example.dtd file. This means that to be valid, the document would need to match all the rules established in that DTD.
Likewise, the document type declaration can also include the rules itself, rather than pointing to an external DTD:
<?xml version="1.0" ?> <!DOCTYPE document [ <!ENTITY legal "This document is confidential."> <!-- More rules would be included here --> ]> <document> &legal; </document>
In this case, you would include the same rules in this form as you would have in the external DTD. This can be very useful for keeping your files linked to the rules, or for including a few simple entity declarations. There are advantages to including your declarations in the internal DOCTYPE, or in pointing to an external DTD. We will discuss those issues more in Chapter 4.
XML Schemas: A Glimpse
Document Type Definitions are actually a holdover from SGML, and as such, there have been many critics of DTDs with respect to XML. The first problem with DTDs is that they use their own special syntax, which is not very intuitive to many authors. The second problem is that DTDs themselves are not well-formed XML. Finally, DTDs do not provide a mechanism for defining complex datatypes, which limits some of the potential of XML.
In response to the limitations of DTDs, the W3C has developed a schema mechanism that is specific to XML: XML Schemas. XML Schemas use a (somewhat) more intuitive syntax, and are actually XML documents themselves. This makes it easier for XML developers to integrate XML Schema support into applications.
Additionally, XML Schemas provide a means for defining datatypes for elements and attributes. Datatypes allow you to restrict the content of elements and attributes to specific types of data, such as a digit, a date, or a string. This is a very powerful new aspect of schemas that was not possible with DTDs. We will discuss XML Schemas and datatypes at length in Chapter 5.
XML Schemas are also external files, which are linked to XML documents through a couple of special attributes:
<?xml version="1.0" ?> <document xmlns: </document>
The first attribute is the xmlns:xsi, which defines the namespace for the xsi:noNamespaceSchemaLocation attribute, which is actually used to point to the location of the schema.
XML Schemas can also be included within an XML document by making use of namespaces (which are discussed in detail in Chapter 6). Because XML Schemas are XML, they can be included directly in the document:
<?xml version="1.0" ?> <document xmlns: <xs:schema xmlns: <xs:element <!-- More schema rules defined here --> </xs:schema> <title>My Document</title> </document>
We will discuss the mechanisms for writing and including XML Schemas in your XML documents in greater detail in Chapter 5; however, you should keep XML Schemas in mind for validation. XML Schemas are easier to author and offer more power and flexibility than DTDs. However, because DTDs are essentially a part of XML 1.0, in which Schemas are a separate Recommendation, there may be more application support for DTDs until Schema usage becomes more widespread.
Another point to keep in mind: Validation is not necessary. Well-formed XML can be used in many applications without any problems whatsoever. However, validation can be a valuable tool, and it is important to consider this idea of validation. If you are using XML as a data format then validation can really be an important asset. By using a DTD or Schema for validation, you can enforce your markup language's rules for others authoring XML instance documents.
Validation can also be used to make sure that users do not corrupt the data being stored in your XML files. This is perhaps the most important reason for validation: It enables you to enforce some degree of data integrity.
|
http://www.informit.com/articles/article.aspx?p=27865&seqNum=14
|
CC-MAIN-2019-30
|
refinedweb
| 1,105
| 60.95
|
Teleprompter displays scripts formatted as HTML, Markdown, or plain text, scrolling the script consistently. A second web browser can be attached to a "control" view for that script, able to control or stop the scroll speed and jump back to the top of the script. Each scrolling view can be adjusted to conform to different equipment needs.
npm install -g teleprompter. If you get an
EACCESSerror at the end, you may have to run
sudo npm install -g teleprompterinstead.
cdcommand to change the current working directory to where your scripts live. For example, if your scripts are located at
/Users/schoon/Scripts, you'd type
cd /Users/schoon/Scripts.
teleprompter.
A setup that has worked well in studios so far has included the following:
teleprompter.
Teleprompter expects to be run against a directory of text files. If unspecified, the current working directory is used instead.
Usage: teleprompter [OPTIONS] <scripts>Options:<scripts> A directory of scripts to load. [Default: .]--help, -h Print usage information, then exit.--port, -p Specify the port to listen on. [Default: 8080]
Teleprompter will run a web server at the configured port, so ensure the port is available to use. A reasonable default, 8080, has been provided, and shouldn't collide with other applications. (If it does collide, you probably know well enough to change it.)
For best results, use the newest browser supported by your platform. For iOS and Android, this will probably be Safari and Android, respectively. For older phones and tablets, however, a more up-to-date browser may be available through the App Store and Play Store. When filing issues, please let us know the browser you ran into trouble with, and we'll see what we can do!
Once Teleprompter is running successfully, a web browser can be pointed at any
of the routes listed below, replacing
{script} with the name of the script's
file, excluding the extension. For example, if you want to load the file
glow-cloud.md or
dog-park.html, you would use
glow-cloud or
dog-park in
the URL, respectively.
Two additional routes exist for internal synchronization of Control and Script
displays. Any event posted to the Publish route is sent to all clients attached
to the Subscribe socket, where
{namespace} is any string, generally the name
of the script being displayed.
These events arrive in the form of JSON with a
type field (in addition to the
text/event-stream-provided
event field) and additional metadata.
The table below lists the available events and their desired effects.
|
https://www.npmjs.com/package/teleprompter
|
CC-MAIN-2017-39
|
refinedweb
| 423
| 65.42
|
an example of tag file
/WEB-INF/tags/sample.tag
{{{
<%@ taglib prefix="c" uri="" %>
<%@ attribute name="foo" required="true" %>
<%@ attribute name="bar" %>
<%@ attribute name="baz" %>
<div>
<div>foo: ${foo.toString()}</div>
<div>bar: ${bar.toString()}</div>
<div>baz: ${baz.toString()}</div>
</div>
}}}
an example of jsp file calling tag without "bar" and "baz" attributes
{{{
<%@ taglib prefix="tags" tagdir="/WEB-INF/tags" %>
<html>
<body>
<h2>Example of Uninitialized Tag Attributes</h2>
<tags:sample
</body>
</html>
}}}
I found that the uninitialized attributes were NOT set into context in Java files generated from tag file.
Besides, when the uninitialized attributes were called in EL, ScopedAttributeELResolver tries to resolve attributes name as java class names.
ref. getValue method in
I have already confirmed that the uninitialized attributes actually affect performance.
I got jfr to compare between there are uninitialized attributes or not.
It was confirmed that lock wait occurs in java.util.jar.JarFile class and sun.misc.URLClassPath class only when there are uninitialized attributes in tag.
the related bug
Performance issue evaluating EL in custom tags (tagx) due to inefficient calls to java.lang.Class.forName()
When Tomcat parses "baz.toString()" Tomcat has no way to determine if "baz" is a class name and "toString" is the name of a static method or if "baz" is an attribute and "toString" is an instance method. Tomcat has to do the class lookup to find out.
The optimisation that was implemented in bug 57583 does not apply in this case.
In this simple case removing the "toString()" calls would improve performance (it would allow the optimisation from bug 57583 to work) as there is an implicit toString() in the EL evaluation.
In a more complex case the only way for an application to ensure it avoids the expensive lookup is to use an explicit scope with the attributes. i.e.:
<div>foo: ${pageScope.foo.toString()}</div>
etc.
Generally, explicitly stating the scope is the best approach. It allows faster execution and does not depend on any container specific optimisation.
I've been taking a look at the code to see if there is anything further that can be done in terms of optimisation. There are additional special cases we could handle in a similar manner to bug 57583 but every special case adds complexity and maintenance overhead. I'm not sure at this point if the additional benefits are worth the cost. I need to run a few tests.
Created attachment 35975 [details]
Patch for potential optimisation
Whilst I remember, using explicit imports rather than wildcard imports will improve performance as well.
I am attaching a patch the implements an optimisation to reduce the need to class loader lookups for the standard packages imported by all JSPs and tag files.
The overall performance improvement is modest (about a factor of 3 for the resolution). I'm not convinced that the performance improvement justifies the ongoing maintenance. The classes need to be kept in sync or things could break. That is a significant negative in my view.
At the moment, using explicit scopes looks like a much better solution.
Moving this to an enhancement request as this is a performance optimisation not incorrect behaviour.
(In reply to Mark Thomas from comment #2)
> Created attachment 35975 [details]
> Patch for potential optimisation
> standardPackages.put("javax.jsp.servlet", servletJspClassNames);
A typo. The package name should be "javax.servlet.jsp".
> // Standard package where we know all the class names
New classes can be added to java.lang in future versions of Java. I see from a comment that you worked from Java 11 EA javadoc, but it is still fragile.
.)
(In reply to Konstantin Kolinko from comment #4)
> .)
May the jar file names containing those packages are reasonabe stable and one could instead open them as zip files and list the class files they contain with the wanted packages? Just a quick thought.
That typo is significant. With the typo fixed the performance difference is 4.6s vs ~50ms - two orders of magnitude difference. I think that probably changes the balance.
Future additions to java.lang concern me too. I took a quick look at automatic generation / tracking but opted for the manual version (from the Javadoc as it happens) for this patch.
Given the performance improvement, some more thought on how to maintain this reliably is called for.
The JAR file name could work - as long as rt.jar is in an expected location. I think the location is stable relative to JAVA_HOME. We'd still need to filter public / non-public classes but that should be easy enough.
I think that the feature of JSP Tag files reported in this BZ ticket is rather confusing. In my opinion, if I have a tag file that declares
<%@ attribute name="baz" %>
then evaluating "${baz}" should return the value of that attribute that I declared. If the tag file was called without any value for the attribute, return null, without falling back to outer scopes (requestScope.baz, sessionScope.baz, applicationScope.baz).
This fallback behaviour is rather odd.
Specification text, JSP2.3MR.pdf - "JSP.8.3 Semantics of Tag Files" page 209/594:
[quote]
For each attribute declared and specified, a page-scoped variable must be created
in the page scope of the JSP Context Wrapper, unless the attribute is a deferred
value or a deferred method, in which case the VariableMapper obtained
from the ELContext in the current pageContext is used to map the deferred
expression to the attribute name. The name of the variable must be the same
as the declared attribute name. The value of the variable must be the value of
the attribute passed in during invocation. For each attribute declared as optional
and not specified, no variable is created. [...]
[/quote]
If we deviate from the specification, and disable the fallback to outer scopes, technically it can be done in Jasper, without changing the EL implementation:
When EL evaluates an identifier, it consults the VariableMapper first, before looking into `ELResolver`s.
So if one registers such attribute in the VariableMapper as a ValueExpression that evaluates to the constant value of null, the fallback to `ELResolver`s will not happen.
Created attachment 35990 [details]
Updated patch with test cases to check class lists
Given the general approach of Java EE (which I assume will continue in Jakarta EE) to backwards compatibility, I don't think the behaviour required by the specification is going to change.
The performance implications of adding imports to EL were not realised at the time. They are now part of the spec and, for the same reason as above, I don't think that will change.
Using a non-specification compliant work-around is an option. The work-around proposed looks reasonable to me. However, experience tells me that it is hard to judge what proportion of users would benefit from it vs would see failures with it. Clearly, it would be optional. The question would be whether to enable it by default or not.
One of the benefits of the Java 9 module system is that it is possible to enumerate all the classes in java.lang. I have attached an updated test case that does this and also checks the Servlet/JSP spec classes too.
The patch is undoubtedly a hack but with the test cases ongoing maintenance is minimal and users are saved the hassle of having to go through their code and add explicit scopes everywhere.
Unless there are objections, I plan to commit this to trunk.
One more idea:
The public classes in Java Platform API all follow the naming conventions and start with an uppercase character A-Z.
Attribute names usually start with a lowercase character.
That sounds like the best hack ever.
That is a neat hack. The downside is that it isn't as complete a fix. It wouldn't help users where the attribute name started with an upper case letter although I suspect this is fairly rare.
Given that we have a patch for a more complete fix and that the code that runs on every lookup isn't that different, I'm leaning towards applying the attached patch.
Fixed in:
- trunk for 9.0.11 onwards
- 8.5.x for 8.5.33 onwards
Note: 7.0.x and earlier not affected.
|
https://bz.apache.org/bugzilla/show_bug.cgi?id=62453
|
CC-MAIN-2019-30
|
refinedweb
| 1,379
| 65.01
|
Post here for problems and discussion of the kaggle-cli tool
I'm getting this error when trying to submit using the kaggle cli tool: $ kg submit 'output.csv' -c 'dogs-vs-cats-redux-kernels-edition'Starting new HTTPS connection (1):'NoneType' object has no attribute 'find'
Has anyone seen this before/know how to fix it? I tried pip upgrading kaggle-cli, but that didn't help.
Do you have a Kaggle account, and did you set your username and password information (i.e., "kg config -g -u username -p password -c dogs-vs-cats-redux-kernels-edition") ?
username
Yes, I did the username and pass config. It was working just the minute before this error started showing up. I think I've figured it out though.. I've probably hit the max number of submissions per day cap for that competition and this the error you get when that happens?
Could be - try using the FileLink() trick I showed in the last class to download a file that you can then submit using your browser. That way you'll be able to see any errors that occur.
Thanks. The problem was that I hit the cap for the day. Now that the UTC day has rolled over, it is working again.
Did anyone figure out how to use kaggle-cli if you logged into kaggle through google.
GNU nano 2.5.3 File: ../download_data.py
import requests
# The direct link to the Kaggle data set
data_url =raw_input('Please enter the url for the data: ')
print "To download data from",data_url
# The local path where the data set is saved.
local_filename = data_url.split('/')[-1]
print "To save as",local_filename
# Kaggle Username and Password
kaggle_info = {'UserName': "username", 'Password': "password"}
# Attempts to download the CSV file. Gets rejected because we are not logged in.
r = requests.get(data_url)
# Login to Kaggle and retrieve the data.
r = requests.post(r.url, data = kaggle_info)
# Writes the data to a local file one chunk at a time.
f = open(local_filename, 'w')
for chunk in r.iter_content(chunk_size = 512 * 1024): # Reads 512KB at a time into memory
if chunk: # filter out keep-alive new chunks
f.write(chunk)
print chunk
f.close()
I have used this trick with Facebook, expect it to work with Google, too. Ask kaggle to reset your password. You will recieve e-mail with three links, the last one allows you to setup username/password pair. Then just use this data with kaggle cli as normal.
I used w3m browser on my aws instance was much easier...install with apt-get install w3m and then use commad w3m "url" to get to kaggle and DL
I tried to to that, I was able to log in with my password and the website looked like this:
However, when I go to most of the pages, the page will load and will only display this:
I do not know how to proceed as there are no links displayed. Is there another way to get data to AWS?
As of today, Kaggle seems to have changed html of their pages so kg donwload will not load data even with the proper setup. There is a new opened issue on GitHub, which was not resolved yet.
kg donwload
Do you have any suggestions on how to get data to AWS from a local machine or directly from Kaggle?
@maxim.pechyonkin have you tried this hack ?
The issue has been patched, it works now.
Yep, just need to upgrade the kaggle-cli package:
pip install --upgrade kaggle-cli
Heads up for people who run into error messages like the following
reduce() of empty sequence with no initial value
I did some digging around the kaggle-cli repository and found this issue where the author mentioned that he fixed this in the latest version. It looks like the issue was resolved 7 days ago, so in case anyone is running an older version of kaggle-cli like I was, you can run pip install --upgrade kaggle-cli to fix this issue as well.
kaggle-cli
This worked for me, I was getting this error "Node Type is missing find-all" trying to donwload the competition.Youre suggestion to upgrade fixed it.Thanks!!
I have used the same trick. Now I use the username and password in kg config command. But when I use kg download, it says ->
Starting new HTTPS connection (1): was an error logging in: The username or password provided is incorrect.
However, I can login to the kaggle website directly using the same user name and password. Any idea what could be the issue?
Even after doing kg config -g -u username -p password -c dogs-vs-cats-redux-kernels-edition
I had to pass username and password in kg download command again to resolve this issuekg download -u username - p password -c dogs-vs-cats-redux-kernels-edition
I'm still having issues downloading cats vs dogs using the kaggle-cli. Each time I attempt to download I get:Warning:download url for file test.zip resolves to an html documentrather than a downloadable file.
and then the download aborts. Any suggestions?
|
http://forums.fast.ai/t/kaggle-cli-issues/177
|
CC-MAIN-2017-51
|
refinedweb
| 859
| 71.75
|
Bigtop::Docs::Cookbook - Bigtop syntax by example
This document is meant to be like the Perl Cookbook with short wishes you might long for, together with syntax to type in your bigtop file and what that produces. In addition, many sections start with a simple question about what gets built by the backend in question.
This document assumes you will be editing your bigtop file with a text editor (it was written before tentmaker). You may also choose to maintain your bigtop file with tentmaker. Generally, the advice here governs what values you put in the boxes at the far right side of the Backends tab in tentmaker. Some of the other advice must be applied on the App Body tab. See Bigtop::Docs::TentTut to get started with tentmaker or Bigtop::Docs::TentRef for full details on using it.
For full syntax consult Bigtop::Docs::AutoKeywords and/or Bigtop::Docs::Syntax along with Bigtop::Docs::AutoBackends. You could also run tentmaker which displays the same things as the Auto Docs, but in an organized way in a browser.
The questions are in sections. Here is a complete list of sections and questions:
CGI
The two main paths to laziness are tentmaker (see Bigtop::Docs::TentTut) and the bigtop script itself -- with the proper command line parameters.
Suppose you have a little data model:
+--------+ +--------+ +-------------+ | child |----->| family |<-----| anniversary | +--------+ +--------+ +-------------+
You could start your app like this:
bigtop --new Contacts \ 'child(name,birth_day:date)->family(name,phone,+email) anniversary(anniv_date:date,preferred_gift=money)->family'
The string in single quotes is a 'kickstart' description of the data model. Column names go in parentheses. Types default to strings, if you need something else use a colon as for
birth_day in the child table. Indicate optional fields with a leading plus sign. Specify literal defaults with an equal sign as for anniversary
preferred_gift.
In addition to the columns listed, each table will have an integer primary key called id and two dates fields: created and modified. You may remove those after initial generation is you like, but eliminating the integer id makes using an Object Relational Mapper (ORM) harder.
For slightly different discussion and instructions for on building an app directly from an existing PostgreSQL 8 database, see
Bigtop::Docs::QuickStart.
Suppose that you want to add some tables to the app from the previous question, here's all you need to do (from the Contacts directory):
bigtop --add docs/contacts.bigtop 'family<->job(title,description)'
So, you can use a 'kickstart' even if you have already started. Then it will kick start the addition of tables.
This will add a new table for jobs and a many-to-many relationship between it and the existing family table. It will also rebuild the app. Specify columns for new table as in the previous question. Existing tables will get new foreign keys, but can't be changed in any other way from the command line. Note that you will need to alter your database before restarting the application. Use the new table(s) and foreign key(s) in docs/schema.YOUR_DB_NAME to make the additions.
This question could talk about two things. If you are using bigtop with the --new (-n) or --add (-a) flags, you might want to provide table names and their relationships while listing table columns and their SQL types. To control these defaults use the kickstart syntax described in Bigtop::ScriptHelp::Style::Kickstart.
To control defaults like author names or copyright statements, keep reading here.
In normal use, I invoke the bigtop script to build new applications with the -n flag:
bigtop -n NewApp file.kickstart
Again, see Bigtop::ScriptHelp::Style::Kickstart for what to put in the kickstart file.
When bigtop builds an application like this, it uses a default bigtop stub so that the result will run once the SQLite database is in place. You may replace the default stub with one of your own. Simply put a file called
.bigtopdef in your home directory. Your .bigtopdef file must be a valid bigtop file, but you may put any valid bigtop commands in it. This allows you to control defaults like authors and copyright statements. You can use it to specify
mod_perl 2 as the default engine. You could even include a default table in every app.
Bigtop will use .bigtopdef if it is present in your home directory. Then, it will augent it based on the command line request given to -n flags. The same .bigtopdef is used when you start tentmaker with the -n flag.
If your config includes:
config { Init Std {} }
bigtop will generate the following regular files:
Build.PL Changes MANIFEST MANIFEST.SKIP README
It also makes the following directories:
docs lib t
It will try to put the bigtop file into the docs directory (but it won't overwrite it, if its already there). Note that Init doesn't put things into the lib or t directories.
Everything Init::Std builds is a stub (and will never be overwritten), except the MANIFEST.
Once upon a time, Init Std was kind of stupid. It would rewrite all of its files everytime, unless you asked it not to. Now, it thinks of all of its files, except the MANIFEST, as stubs. That means, it will no longer write README, Changes, Build.PL, or MANIFEST.SKIP, unless they are missing from the disk.
Because of history, there are now two ways to turn off MANIFEST updating. As with all backends, you can prevent all regeneration:
Init Std { no_gen 1; }
But you may also be explicit:
Init Std { MANIFEST no_gen; }
When the MANIFEST is regenerated, Init Std uses the same method as both MakeMaker and Module::Build. So, you could do it yourself with:
./Build manifest
That is independent of whether bigtop updates MANIFEST.
There is no special backend for making stand alone servers, but there is a way to generate them for Gantry:
To get a stand alone server, do just what you would for a CGI app, but add the with_server statement to the CGI backend block in the config section:
config { engine CGI; Init Std {} CGI Gantry { with_server 1; } } app Name { config { variable_1 value; variable_2 `multi-word value`; overriden global; } controller SubPage { rel_location subpage; } }
This yields a CGI script as normal and app.server which can be executed directly (it requires HTTP::Server::Simple). Here is a simplified version of what you get:
#!/usr/bin/perl use strict; use CGI::Carp qw( fatalsToBrowser ); use Name qw{ -Engine=CGI -TemplateEngine= }; use Gantry::Server; use Gantry::Engine::CGI; my $cgi = Gantry::Engine::CGI->new( { config => { variable_1 => 'value', variable_2 => 'multi-word value', overriden => 'global', }, locations => { '/' => 'Name', '/subpage' => 'Name::SubPage', }, } ); my $port = shift || 8080; my $server = Gantry::Server->new( $port ); $server->set_engine_object( $cgi ); $server->run();
The actual version includes option handling to allow command line control of which DBD, database user, and database password.
This server binds to port 8080 by default. To change the port, add the server_port statement:
CGI Gantry { with_server 1; server_port 9999; }
This will change the script in only one place:
my $port = shift || 9999;
As you can see, users can supply a port on the command line when they start it.
While you could edit your stand alone server, that removes the fun of letting bigtop keep it up to date. Here's how to switch databases. There are really two approaches: (1) specify the database connection info with command line flags or (2) put the database connection info into a named config block and choose that with command line flags.
The first approach is for those with more impatience than laziness. But, remember that laziness is the chief virtue. If you have a database built, you can specify it (even if Bigtop wouldn't support SQL generation for it):
./app.server -d DBDName -n dbname -u username -p password
To make that specific, suppose I have a PostgreSQL database called 'littledb' which a user called 'bobby' is allowed to access with password 'valentine':
./app.server -d Pg -n littledb -u bobby -p valentine
Yes, that is a lot of typing. Usually, you do it once and use up arrow to find the command again every time you need a restart. Still, the other way is cheaper on the keystrokes. It just requires a bit of up front work.
In Bigtop files, you may have as many config blocks as you like. Among other things, these include database connection information. To specify the last example database add a config block like this:
config littledb { dbconn dbi:Pg:dbname=littledb dbuser bobby dbpass valentine }
Note that the name of the config block is arbitrary, except that 'base' is reserved as the internal name for the (normally) unnamed block. With that config block in place, you may regen:
bigtop docs/yourapp.bigtop all
and then start the app server, asking it to use the new config block:
./app.server -t littledb
The main idea behind the named config blocks is to allow this sort of quick shift. It also works well for dev vs. qual vs. prod.
SQL backends make docs/schema.* (where * is for your database engine, like postgres) in the build directory. It should be ready for direct use to create your database.
Note that unlike other backend types, you can build with all of the SQL backends concurrently. They write different files. They also do a bit of interpretation to handle differences in their SQL syntax.
Tables are made with blocks:
table name { #... }
Inside the braces you need may specify the table's sequence and its fields:
table name { sequence name_seq; field id { is int4, primary_key, auto; } field name { is varchar; } }
Include
primary_key as one of the attributes of the is statement for the field (see above or below). This will add
PRIMARY KEY in the schema.*, but will also show Model backends that the field is primary.
Here is a table with several types of fields:
table invoices { sequence invoices_seq; foreign_display `%number`; field id { is int4, primary_key, assign_by_sequence; } field number { is int4; label `Number (example: COM-12)`; html_form_type text; html_form_constraint `qr{^\w\w\w-\d+$}`; } field status_id { is int4; label Status; refers_to status; html_form_type select; } field paid { is date; label `Paid On`; date_select_text `Popup Calendar`; html_form_type text; html_form_optional 1; } field customer_id { is int4; label Customer; refers_to customers; html_form_type select; } field has_good_default { is varchar; label `Replace as Desired`; html_form_type text; html_form_default_value `avalue`; } field notes { is text; label `Notes to Customer`; html_form_type textarea; html_form_optional 1; html_form_rows 4; html_form_cols 50; } }
Note that int4 will be converted into a reasonable integer type for your database, even if it doesn't use that as a keyword.
The foreign_display statement controls how rows from this table appear when other tables refer to them. This is available through the model's foreign_display method:
my $show_to_user = $invoice_row_object->foreign_display();
Each field that might appear on the screen should have a label which the user will see above or next to the values. It becomes the column label when the field appears in a table. It appears next to the entry field when the user is entering or updating it.
Including the refers_to statement implies that the field is a foreign key. Whether this generates SQL indicating that is up to the backend. None of the current backends (Bigtop::SQL::Postgres, Bigtop::SQL::MySQL, or Bigtop::SQL::SQLite) generate foreign key SQL. But, using refers_to always affects the model. For instance, Bigtop::Model::DBIxClass generates a belongs_to call for each field with a refers_to statement. Other Model backends do the analogous things.
The date_select_text is shown by Gantry templates as the text for a popup calendar link. See the discussion of the LineItem controller in Bigtop::Docs::Tutorial for details. You might also want to check 'How can I let my users pick dates easily?' in Gantry::Docs::FAQ to see what bigtop generates.
All of the statements which begin with html_form_ are passed through to the template (with html_form_ stripped). Consult your template for details. The Gantry template is form.tt. Note that html_form_constraint is actually used by Gantry plugins which rely on Gantry::Utils::CRUDHelp. This includes at least Gantry::Plugins::AutoCRUD and Gantry::Plugins::CRUD. These constraints are enforced by Data::FormValidator.
Use html_form_default_value if you want a default when the user and the database row haven't provided one.
Sometimes it's useful to put some data into the database during creation. Two types that spring to mind are test data and standard constants. To include such data add data statements to the table block:
table status_code { #... data name => `Begun`, descr => `work in progress`; data name => `Billed`, descr => `invoice sent to customer`; data name => `Paid`, descr => `payment received`; }
Notes: (1) you should not set the id if your table has a sequence or is auto-incrementing the primary key (and it should one do or the other). (2) remember to surround the values with backquotes if they have any characters Perl wouldn't like in a variable name (it's always safe to have backquotes around values, even if they aren't strictly needed, think of them like the comma after the last item in a Perl list). (3) you can use as many data statments as you like, each one makes an SQL statement:
INSERT INTO status_code ( name, descr ) VALUES ( 'begun', 'work in progress' );
Note that tentmaker cannot insert, update, or delete data statements. But, if you have them in your file, it will not harm them. To get around this tentmaker limitation, you need to create literal SQL blocks with INSERT statements in them. See the next question, for a discussion of literal SQL blocks.
At any point in the app section, you may include a literal SQL statement:
literl SQL `CREATE INDEX name_ind ON some_table ( some_field );`;
There are a couple of things to notice here. First, enclose all of your literal content in backquotes. It will only be modified in one way. If it doesn't end in whitespace, one new line will be added to it. Otherwise, you are on your own.
Second, there are two semi-colons here. The one inside the backquotes is for SQL, the one outside is for Bigtop. The later semi-colon is always required. It's up to you to make sure the syntax of your literal SQL code is correct (including determining whether it needs a semi-colon).
If you want a trailing empty line, do this:
literl SQL `CREATE INDEX name_ind ON some_table ( some_field ); `;
All trailing whitespace is taken literally. If you include any, no extra new line will be added.
The order of SQL generation is the same as the order in your Bigtop file. For example, since the index creation above must come after some_table is defined, put the literal statement after some_table's block.
You may use literal SQL statements as a way to work around tentmaker's inability to handle table level data statements. Simply put your INSERT statements into a literal SQL statement after the table's block.
Use a sequence block:
sequence name_seq {}
This will generate:
CREATE SEQUENCE name_seq;
in schema.*. Note that blocks for sequences must currently be empty. Eventually they should support min and max values, etc.
Most databases don't use sequences. Of the databases supported by bigtop, only Postgres has them. Even for Postgres, we don't typically use them any more.
CGI backends make a single CGI based dispatching script called app.cgi directly in the build directory. You will have to copy it to your cgi-bin directory and make sure the copy there is executable. If you use the with_server statement in the CGI backend block, they will also make app.server. You may run it as a stand alone web server, which is especially useful during testing.
This question does not represent best practice any more. See the next question which explains you to use Gantry::Conf.
Specify CGI configuration values with config blocks as you would for mod_perl apps:
app SomeApp { config { dbconn `dbi:Pg:dbname=appdb` => no_accessor; dbuser `someone` => no_accessor; dbpass `not_tellin` => no_accessor; page_size 15; } }
These become config hash members:
my $cgi = Gantry::Engine::CGI->new( config => { dbconn => 'dbi:Pg:dbname=appdb', dbuser => 'someone', dbpass => 'not_tellin', page_size => 15, } );
Note: if you don't use Gantry::Conf, all config parameters for your CGI script must be at the app level and they will only appear in the config hash of the Gantry::Engine::CGI object.
To use Gantry::Conf with CGI scripts, do two things. First, use the Conf Gantry backend, telling it the instance name of your app. Second, set gantry_conf in the CGI backend block:
config { #... Conf Gantry { instacne `your_name`; } CGI Gantry { gantry_conf 1; } }
The instance will be the name of the app's instance in your /etc/gantry.conf. If your master conf lives in a different file, use a block like this instead:
config { #... Conf Gantry { instance `your_name`; conffile `/etc/my_hidden_conf/master.conf`; gen_root 1; } CGI Gantry { gantry_conf 1; } }
If you use a SiteLook backend, you probably want to set
gen_root in the Conf Gantry backend, so it will manufacture a path to your wrapper and other templates.
Many times config info varies depending on environment. For instance, in production you may need to connect to a different database. Named config blocks help with that. Example:
config { # all common config here rows_per_page 25; } config dev { dbconn `dbi:SQLite:dbname=app.db`; } config prod { dbconn `dbi:Pg:dbname=proddb;host=db.example.com`; dbuser someuser; dbpass `$ecr3t`; }
With the Conf Gantry backend, this will lead to a single config file with three instances. Each will begin with the instance prefix from the Conf Gantry backend config block ('your_name' in the example above). The unnamed block will have that instance name. The others will have it as a prefix with their config block name (a.k.a. their config type) as a suffix. So the instance names will be 'your_name', 'your_name_dev', and 'your_name_prod'.
How you access these depends on how you deploy the app. In the stand alone server (as we saw in a previous question), you can use the -t flag:
./app.server -t dev
In CGI, the script uses the config block named CGI or cgi without additional help, though you might want to edit the generated CGI script to alter where it looks for the master conf file. For mod_perl, see below.
The locations your CGI script can manage will come from your controllers. Each controller should have either a location or a rel_location directive. locations are used as is, rel_locations have the location for the app prepended. Note that the app location is optional and defaults to '/'. Do not start or end locations or rel_locations with / (except that the app level location can be '/').
app MyAppName { location `/mysubsite`; #... table definitions here controller SomeTable { rel_location `sometable`; } controller Odd { location `/pretends/to_be/part_of/other/app/odd`; } }
For the Gantry CGI backend, this leads to the following excerpt in app.cgi:
my $cgi = Gantry::Endgin::CGI->new( locations => { '/mysubsite' => 'MyAppName', '/mysubsite/sometable' => 'MyAppName::SomeTable', '/pretends/to_be/part_of/other/app/odd' => 'MyAppName::Odd', }, );
HttpdConf backends make docs/httpd.conf suitable for use in a mod_perl apache conf file or as the value of an Include statement there.
The answer to this question no longer represents best practices. See the next question for how to use Gantry::Conf instead.
Use config blocks to specify PerlSetVars:
config { engine MP13; # You could use MP20 instead of MP13. Init Std {} HttpdConf Gantry {} } app Name { config { variable_1 value; variable_2 `multi-word value`; overriden global; } controller SubPage { rel_location subpage; config { overriden subpage; } } }
Note that the SubPage controller includes its own value for the overriden variable. This results in a PerlSetVar statement in the location block for this controller. The app level config block results in three PerlSetVars appearing in the root location block. Output in docs/httpd.conf:
<Perl> #!/usr/bin/perl use Name; use Name::SubPage; </Perl> <Location /> PerlSetVar variable_1 value PerlSetVar variable_2 multi-word value PerlSetVar overriden global </Location> <Location /subpage> SetHandler perl-script PerlHandler Name::SubPage PerlSetVar overriden subpage </Location>
The Control backend will include these in site object initialization (in the init method) and make accessors for them. Marking them no_accessor prevents both of those things (see Controllers below).
Gantry::Conf allows for all sorts of applications to be configured in all sorts of ways in one place. It allows multiple apps to share configuration information, even if they run on different servers. It allows multiple instances of the same app to use different configuration information, even if they run in the same apache server. See the docs on Gantry::Conf for details on its use.
config { engine MP13; Init Std {} Conf Gantry { instance `your_instance`; } HttpdConf Gantry { skip_config 1; gantry_conf 1; } } app Name { config { variable_1 value; variable_2 `multi-word value`; overriden global; } controller SubPage { rel_location subpage; config { overriden subpage; } } }
The process is very similar for Gantry::Conf as for PerlSetVars. There are a couple of key differences. First, you should add the Conf Gantry backend. Second, you should mark the HttpdConf Gantry backend with gantry_conf, so it won't write PerlSetVars. Finally, you should include the instance statement in the Conf Gantry backend, whose value is the name of your instance in /etc/gantry.conf. If your master config file lives somewhere else, also include conffile in the Conf Gantry backend block:
config { #... Conf Gantry { instance `your_instance`; conffile `/etc/exotic/location/master.conf`; } HttpdConf Gantry { gantry_conf 1; } }
This yields two output files: a shorter httpd.conf and a new Name.conf. Here's docs/httpd.conf:
<Perl> #!/usr/bin/perl use Name; use Name::SubPage; </Perl> <Location /> PerlSetVar GantryConfInstance your_instance </Location> <Location /subpage> SetHandler perl-script PerlHandler Name::SubPage </Location>
Here's docs/Name.gantry.conf:
<instance your_instance> variable_1 value variable_2 multi-word value overriden global <GantryLocation /subpage> overriden subpage </GantryLocation> </instance>
You may need to have a variety of different config setups. For instance, you might need one for dev and a different one for prod. As explained in "How do I specify Gantry::Conf configuration values?", you can have one config block for each deployment. One of them is unnamed (but is called 'base' internally). The others have names you choose. Each becomes and instance. The unnamed one has the instance you chose in the Conf Gantry backend block. The others have that as a prefix and the config block name as a suffix.
You will need to edit the generated httpd.conf to switch configs. Just change the GantryConfInstance PerlSetVar to match the name of the proper instance in the generated docs/App-Name.conf.
There are two ways to put extra things into the generated Perl block, depending on where things should appear. If you need something to come immediately after the #!/usr/bin/perl line (like a use lib), do this:
literal PerlTop ` use lib '/home/myuser/src/lib';`;
As with all literals, you must enclose your content in backquotes and mind your own syntax inside those quotes. You are responsible for whitespace management, except that one new line will be added at the end, if your literal text does NOT have trailing whitespace. So the above will get one new line added to it.
PerlTop blocks always appear in the generated httpd.conf in the order they appear in the Bigtop file and start immediately after the shebang line.
Note that PerlTop may not be soon enough, for statments like
use Apache::DBI, if your httpd.conf has an earlier Perl block. In that case, you must work manually.
If you don't care where the statements fall, you can use a literal PerlBlock statement:
literal PerlBlock `use SomeModule;`;
These and your controller blocks produce output in the order they appear in the bigtop file.
You may include arbitrary things outside of the generated blocks like this:
literal HttpdConf `Include /some/file.conf`;
These appear intermixed with location blocks in the same order as in the bigtop file. All of these come after the <Perl> block.
You may include additional directives in the base location for the app with literal Location statements:
literal Location ` AuthType Basic AuthName "Your Realm" PerlAuthenHandler Gantry::Control::C::Authen PerlAuthzHandler Gantry::Control::C::Authz require valid-user`;
These appear literally immediately below any PerlSetVar statements.
You may include directives in other location blocks by putting literal Location statments inside your controller's block:
controller SecureSubLocation { # ... literal Location ` require group SecretAgent`; }
Gantry controllers usually make two pieces: a stub and a GEN module (but the GEN module will not be made if there are no methods to put in it). The GEN module is designed to be regenerated as changes to the app arise. For this reason, you should not edit the GEN module. Rather, put your code in the stub.
app Apps::Name { #... controller SomeModule { #... } }
This will make Apps/Name/SomeModule.pm and Apps/Name/GEN/SomeModule.pm. You shouldn't need to edit the GEN module. If it is wrong, update your Bigtop file and regenerate.
Use a controls_table statment to associate your controller with a table:
controller SomeTableController { controls_table sometable; }
This has one basic effect: it includes a use statement for the table's model module in your stub and GEN modules. That use statement will import the abbreviated model name. In the example the table has a name like:
package Apps::Name::Model::sometable;
But, it exports
$SOMETABLE as an abbreviation for that package name. So, the generated statement (repeated in the stub and GEN modules) is:
use Apps::Name::Model::sometable qw( $SOMETABLE );
In addition to the basic effect of controls_table, it is also used by methods of type AutoCRUD_form and CRUD_form to make sure the requested fields are available in the controlled table and to find their labels, etc.
Note, that a controller will only control one table as generated. If you need to work with other tables, you'll have to write some code.
If you need a method stubbed in without useful code, you can say:
controller Name { method empty is stub { extra_args `$id`; } }
This will make:
#------------------------------------------------- # $self->empty( $id ) #------------------------------------------------- sub empty { my ( $self, $id ) = @_; }
(Note that extra_args is optional.)
You then fill in the operative bits.
Note that adding stub methods to your Bigtop file once your stub module exists will have no effect, since regeneration never alters existing stubs. To force generation rename or delete the stub module.
Gantry's AutoCRUD supplies do_add, do_edit, and do_delete for simple tables. To use it say
controller Simple is AutoCRUD { method form is AutoCRUD_form { form_name simple fields name, address; extra_keys legend => `$self->path_info =~ /edit/i ? 'Edit' : 'Add'`; } }
This makes the following stub:
package Apps::AppName::Simple; use strict; use base 'Apps::AppName'; use Apps::AppName::GEN::Simple qw( form ); use Gantry::Plugins::AutoCRUD qw( do_add do_edit do_delete form_name ); #----------------------------------------------------------------- # $self->form( $row ) #----------------------------------------------------------------- # This method supplied by Apps::Checkbook::GEN::Trans
Bigtop makes a note in the stub for each method it is mixing in from the GEN module.
Note that both the GEN module and Gantry::Plugins::AutoCRUD are mixins (they export methods). If you don't want their standard methods, don't include them in the import lists. But, if you don't want the ones from Gantry::Plugins::AutoCRUD, you probably want real CRUD (see below).
Gantry's AutoCRUD has quite a bit of flexibility (e.g. it has pre and post callbacks for add, edit, and delete), but sometimes it isn't enough. Even when it is enough, some people prefer explicit schemes to implicit ones. CRUD is more explicit. To use it do this:
controller NotSoSimple is CRUD { text_description `Not So Simple Item`; method my_crud_form is CRUD_form { form_name simple fields name, address; extra_keys legend => `$self->path_info =~ /edit/i ? 'Edit' : 'Add'`; } }
There are only a couple of differences from the AutoCRUD version above. The controller type is just CRUD; the form method is called my_crud_form and has type CRUD_form.
Note that it is important to use a method name that ends in _form, but don't use just _form. The backend says:
my ( $crud_name = $method_name ) =~ s/_form$//;
So using _form as the name (which is required for AutoCRUD) will make Bad Things happen for CRUD.
The above produces a lot of code. I'll show it a piece at a time with running commentary interspersed. It makes a CRUD object:
my $my_crud = Gantry::Plugins::CRUD->new( add_action => \&my_crud_add, edit_action => \&my_crud_edit, delete_action => \&my_crud_delete, form => \&my_crud_form, redirect => \&my_crud_redirect, text_descr => 'Not So Simple Item', );
It makes do_add, do_edit, and do_delete. For example:
#------------------------------------------------- # $self->do_add( ) #------------------------------------------------- sub do_add { my $self = shift; $my_crud->add( $self, { data => \@_ } ); }
(do_edit and do_delete are similar.)
Finally, it provides the callbacks. For example:
#------------------------------------------------- # $self->my_crud_add( $id ) #------------------------------------------------- sub my_crud_add { my ( $self, $params, $data ) = @_; # make a new row in the $YOUR_TABLE table using data from $params # remember to commit }
It also makes my_crud_edit, my_crud_delete, and my_crud_redirect. Note that you don't get actual code for updating your database, just comments telling you what normal people do. Of course, abnormality is one of the main reasons for using CRUD instead of AutoCRUD, so take the comments with a grain of salt.
Note that if you have more than one method of type CRUD_form, the bigtop backend will make multiple crud objects (each named for its form) and the callbacks for those objects. But it will also make multiple do_add, do_edit, and do_delete methods. They will make their calls through the proper crud object, but their names will be duplicated. In that case, you are on your own to change them to reasonable (i.e. non-clashing) names.
The Model GantryDBIxClass backend makes a pair of modules for each table. One is the stub module, the other is the GEN module. Once made, the stub is never regenerated, so put your code in it. The GEN module will be regenerated when you run bigtop.
config { #... Model GantryDBIxClass {} } app Apps::Name { table some_table { #... } }
This makes Apps::Name::Model::some_table (the stub) and Apps::Name::Model::GEN::some_table (the GEN module). Note that the names are exactly the same as the table name. If you want capital letters, use them to name the table.
Due to the way that DBIx::Class binds the methods it makes on the fly, the GEN module mixes in to the stub by using this to start its file:
package Apps::Name::Model::some_table;
So, the disk file is named Apps/Name/Model/GEN/some_table.pm, but the package statement is the same as the one in the stub. This will cause sub redefinition warnings, if you put a sub in the stub with the same name as one in the GEN module. Models generated by Model Gantry inherit from Gantry::Utils::Model, which allows inheritence instead of mixing in. These are the native models.
In addition to regular tables, the Model GantryDBIxClass backend understands the join_table block (which became available in version 0.15). Join tables are needed to support many-to-many relationships like this:
+-----+ +-------+ | job |<-+ +->| skill | +-----+ | | +-------+ | | +-----------+ | job_skill | +-----------+
To express this, add:
join_table job_skill { joins job => skill; }
This will have serveral effects. First, all SQL backends will make the job_skill table with three fiels: id and columns to hold ids for the job and skill tables. Second, the Model GantryDBIxClass backend will make has_many relationships in both the job and skill model modules and put belongs_to relationships for the job and skill tables into the model module for the job_skill table.
The Model GantryCDBI backend makes modules exactly analogous to the Model GantryDBIxClass backend, but for use with Class::DBI. All of the same caveats apply.
We now prefer DBIx::Class over Class::DBI, since the later has difficultly sharing database handles with our older apps, which don't use ORMs.
Each table should have a single column primary key:
table name { sequence name_seq; field id { is int4, primary_key, auto; } }
This will put PRIMARY KEY in the sql for the column and tell the Model backend to make the column primary. This generates:
Apps::Name::Model::name->set_primary_key( 'id' );
or the appropriate analog for your ORM.
Normally Model modules inherit from a Gantry::Utils:: module appropriate for their ORM. You can change that with the model_base_class statement:
table name { model_base_class Gantry::Utils::AuthCDBI; }
The generated output will be the same, except for the base class. The model_base_class need not be in the Gantry::Utils:: namespace.
If most or all your tables need to inherit from a single base class, put it in the backend block:
config { #... Model GantryDBIxClass { model_base_class Exotic::Base; } }
Individual tables can still use the model_base_class statement to override this replacement global default.
To change the behavior of the generated model, put code in the stub or use model_base_class to change what it inherits from.
The Gantry Model backend is simlar to the GantryDBIxClass Model backend. It makes two modules for each table. For example:
table name { #... }
will yield App::Name::Model::name and App::Name::Model::GEN::name. Since these inherit from Gantry::Utils::Model, they don't have problems with binding run time generated methods to the proper package. This leaves them free to use inheritence instead of mixing in. The stub inherits from the GEN module which inherits from Gantry::Utils::Model, so the GEN module begins:
package Apps::Name::Model::GEN::name; use base 'Gantry::Utils::Model;
while the stub begins:
package Apps::Name::Model::name; use base 'Apps::Name::Model::GEN::name';
(actually the stub is also an exporter so it can provide an abbreviated name).
This means that you can safely override methods in the GEN module by simply writing a sub of the same name in the stub.
Summary of inheritence
Gantry::Utils::Model Apps::Name::Model::GEN::name Apps::Name::Model::name
As for the other Model backends, include primary_key in the is statement for the primary column:
table name { field id { is int4, primary_key, auto; } }
This will make an implicit sequence for the id field.
You could use a sequence with the table:
table name { sequence name_seq; field id { is int4, primary_key, auto; } }
Then the auto-increment values will be drawn from the explicit sequence
name_seq.
To change what the GEN model inherits from use the model_base_class statement:
table name { model_base_class Gantry::Utils::Model::Auth; #... }
The base class you specify should respond to the same api as Gantry::Utils::Model (which is a subset of Class::DBI).
You can put this in the backend block if you want to make it the default:
config { #... Model Gantry { model_base_class Exotic::Base; } }
Even if you do that, individual tables can still requset a special base class by supplying the model_base_class statement.
To alter the generated behavior, override the offending method in your stub.
Most backends use TT to generate their output. Those that do default to Inline::TT. That means there is a hard coded template inside their module. To change what these generate, copy the template out of the module. Change whatever you want, except the names of the blocks. Save the result. Then add a template statement to the backend's config block, with its value set to a path to your newly saved template.
If you code your own template and it needs addtional information from the backend, you'll have to modify the backend or write your own. It is not easy to inherit from backends. Rather, you need to copy the backend and rename it. Keep in mind that all backends are sharing the syntax tree package namespaces. This means that your methods need to be uniquely named to avoid redefining methods supplied by other backends.
See Bigtop::Docs::Modules for advice on writing your own backends.
|
http://search.cpan.org/~philcrow/Bigtop-0.38/lib/Bigtop/Docs/Cookbook.pod
|
CC-MAIN-2014-41
|
refinedweb
| 5,919
| 63.39
|
Bioconductor has moved to GIT for contributed packages; the subversion logs are no longer active. The following are the git logs.
This is a list of recent commits to git.bioconductor.org, the master(development) branch of the Bioconductor GIT repository.
This list is also available as an RSS feed (master branch), and RSS feed (release branch)
Updated CITATION.
the legend labels for the discrete heatmaps are correct now
adjust the print message
bump version number
update
version bump
fix vignette engine
Added warning if weight.list not specified for statistics consolidation.
Warn when score column in mappability or reptiming data is not numeric (#35)
Move reptiming tiling from IntervalFile.R to preprocessIntervals (#31).
DESCRIPTION is modified to remove warnings from imported packages. Version is set to 1.99.2
Generic functions are debugged and version moved to 1.99.3.
Updated build ignore list.
version moved to 1.99.2
gitignore & buildignore added.
trying one more time
Resync with rbind() enhancement on DataFrame objects in S4Vectors 0.19.5
Small tweaks to "cbind" and "c" methods for DataFrame objects - The "cbind" method now accepts (and ignores) NULLs in the input. - The "c" method now supports the 'ignore.mcols' argument.
Extra comment.
Streamlined calculateCPM test with log=TRUE.
Updated NEWS.
Removed unnecessary *.Rout.save file.
Updated NAMESPACE to export new functions.
Switched consolidate* function tests to use testthat. Added separate tests for consolidateTests, consolidateOverlaps.
Fixed bugs to pass CHECK.
Added docs for consolidateTests/Overlaps.
Added consolidateTests/Overlaps functions.
Updated docs for new consolidateWindows function.
Refactors window consolidation function.
Added tests for asDGEList, empty scaledAverage.
Docfix.
Updated basic tests, NEWS.
Bumped version, date.
Added docs, tests, minor edit to calculateCPM.
Added calculateCPM convenience function.
Moved tests for scaledAverage to separate file, with testthat.
Transitioned normalization tests to use testthat.
Fixes to examples in docs, depracation warning.
Switched normFactors to a normal function, not S4 method.
Docfix.
Updated some test results.
Updated tests for new normFactors.
Split TMM normalization into normFactors function.
Updated tests for new scaledAverage input.
Minor fix to asDGEList.
Updated documentation.
Small fixups to asDGElist.
Updated filterWindows to pass SEs to scaledAverage.
Switched scaledAverage to accept SE objects directly, deprecated DGEList input.
Merge branch 'master' of git.bioconductor.org:packages/csaw
Updated expected test output for new GRanges show().
Merge branch 'master' of git.bioconductor.org:packages/Trendy updates to vignette
updates to vignette
Updated Description
Resync with enhancement of rbind() on DataFrame objects in S4Vectors 0.19.5
2 fixes and 1 enhancement to the "rbind" method for DataFrame objects Most of the logic that used to be implemented in the "rbind" method for DataFrame objects is now in a new "bindROWS" method for these objects. The "rbind" method for DataFrame objects now is just a thin wrapper around a call to bindROWS(). This new "bindROWS" method for DataFrame objects fixes the following long standing bugs: 1. rbind() now properly handles DataFrame objects with duplicated colnames. Note that the new behavior is consistent with base::rbind.data.frame(). 2. rbind() now properly handles DataFrame objects with columns that are 1D arrays. It also provides the following enhancement: 3. rbind() now supports DataFrame objects with the same column names but in different order, even when some of the column names are duplicated. How rbind() re-aligns the columns of the various objects to bind with those of the first object is consistent with what base:::rbind.data.frame() does.
Updated all functions not to output plots
Improvements of results for sets with a small number of columns + fix for correct order in initializer list
v1.7.1 bump w/ file.exists() bug fix
fixed file.exists() warning
Vignette Engine check enhancement - catch edge case where multiple vignette engines are specified in vignette
version bump
Add 'nodup' argument to selectHits()
version bump
update Bioc maintainer
correct Bioc maintainer
Increment version number after merge
Merge pull request #14 from vjcitn/master simplify by omitting BiocSklearn references, and move away old vignette
getting rid of old vignette that had too many details... saved in inst/oldvig
deal with triple-colon
HSDS_Matrix man page
adding simple example for reading hdf5 in package
purging BiocSklearn
adding HSDS_Matrix.R
Merge branch 'master' of git.bioconductor.org:packages/rhdf5client
drop the BiocSklearn dependence
change exmpl
updt dox; ver bump
updt dox
updt dox
updt dox, fix color_by=NULL
updt dox
store cofactor in metadata
+ extractClusters()
add validity check for `custom_isotope_list`
+opt to specify custom_isotope_list prior to launchGUI
code cleaning
updt dox
fix issue #35: + option to specify custom isotope list
rmv git link
fix path to logo
add logo
updt README.md
updt author details
add .travis.yml
Version numbering is fixed.
NEWS & README added.
Major Changes in MLSeq, version moved to 2.x.y from 1.x.y
Merge remote-tracking branch 'upstream/master'
Add inst/CITATION file, pointing at the F1000 article
emph netSmooth
Add bioconductor and f1000 references to readme
Another...
Another..
Another
More packages to be installed from CRAN and not apt-get (travis-ci)
Another package renaming
More on packages for travis..
Fix package names (in R world camelCased, in apt-get world all lower). For travis build.
More.
Move some packages from r_binary_packages to r_packages to fix travis build
Travis to build on release R
Updated pancreas HVG analysis to use multiBlockVar.
Fixed cluster mention.
Switch to FTP path for GEO resources.
Switched to BiocFileCache for local file management.
Added an ignore file.
Bumped version number.
Merge branch 'master' of git.bioconductor.org:packages/simpleSingleCell
Exposed download scripts in MNN workflow.
Exposed download chunks in miscellaneous workflow.
Bug fix for edge widths.
Exposed download in 10X workflow.
Added layout plot for cluster similarities, exposed download info. Added graph abstraction citation.
Turned off titlecaps in read workflow, exposed download.
Merge branch 'master' of
Code cleanup, elaborated on alternative QC methods.
Merge branch 'master' of github.com:BioinformaticsFMRP/TCGAbiolinks
bug fix
Fix vignette
Remove author from description.
Protect against non-informative blocks in multiBlockVar.
removed devel from travis
updated input parameter in qcrReport
Rename some variables These are leftovers from the renaming of queryLength/subjectLength -> nLnode/nRnode on Feb 22, 2016.
Version bump
Bug fix for error if only 1 column of pData
latest changes to handle more local reports and gene selectors; also update for minor changes in SVG format
Remove defunct functions (closes)
GRbaseCoverage
Make custom plot attributes (col,lty,lwd,label) more robust
Version bump
Bug fixes for minNumRegion & multiple covariates Fixes error that was being thrown for minNumRegion = 3. In addition, updates support to adjust for multiple covariates with adjustCovariates. Restrict to a single covariate for matchCovariate.
Modified error message when Ensembl is down.
updated description file
added additional input checks
updated vignette
updated Vignette and Rcode
Fixed problem with fixrank.
Added missing import.
New version.
Merge branch 'trunk' into devel
bump
update vignette build
fix error when non-abstract geom_gate is added to gs. #33
Version bump
bump version number
update comments in vignette Be more clear about the distinction between genes and transcripts. Note that the example given is simplistic.
optimization of genCountMatrixFromVcf function
Merge pull request #198 from csoneson/furtherfix Get rid of Enhances.
Merge branch 'master' into furtherfix
Removed ExperimentHub from Enhances.
Merge pull request #197 from csoneson/heatfix Fix heatmap issue if all values in the matrix are identical.
Merge branch 'master' into heatfix
Merge pull request #196 from csoneson/standardizetour Standardizetour
Merge branch 'master' into standardizetour
Merge pull request #194 from csoneson/datatable preserve column names in datatables
Fix heatmap issue if all values in the matrix are identical.
Merge branch 'master' into datatable
Merge pull request #193 from csoneson/quotes2 bugfix: deparse size factor names
Fixed bug in UI code.
Remove disclaimer from firststeps tour
Fixed SE sanitation test.
Removed reliance on tmp_se during SE sanitation.
Standardize firststeps tour
Revert "single & to avoid shortcut" This reverts commit 469e857d911d178bcb1b66bcd768f1a7144c62bc.
funky error: sanitize_SE_input fails to add the last nested column
single & to avoid shortcut
extra test dedicated to nested DataFrames
Merge branch 'master' into quotes2
cover unit tests that involved nested colData
Merge pull request #195 from csoneson/furtherfix License fix and assorted sundries
Updated LICENSE for more sobriety.
clear spikes to test empty rowData
more honest test name
add spike and size factors
update doc
preserve column names in datatables
bugfix: deparse size factor names
Merge branch 'master' into furtherfix
Merge branch 'furtherfix' of into furtherfix
Deleted unused variable.
Merge pull request #191 from csoneson/furtherfix Additional miscellaneous fixes
Merge branch 'master' into furtherfix
Merge pull request #190 from csoneson/categorical Clarify instances of 'discrete' that mean 'categorical'
's'
Merge branch 'master' into categorical
Merge remote-tracking branch 'origin/categorical' into furtherfix
Switched to numericInput for reddim dimension choice.
Fixed bug, added test for code reporting for initialPanels.
Merge pull request #189 from csoneson/customplot Allow user-specified functions for dynamic coordinate generation
Fleshed out custom plot example.
Added tests for custom plot generation.
Merge branch 'customplot' into categorical
4 space indent; discrete -> categorical documentation
Fixed iSEE documentation.
Fixed tests for the existence of custom column plots.
Merge branch 'master' into customplot
Merge pull request #188 from csoneson/renamed Restrict generation of certain variables in code tracker
Clear brushes upon reselect in a custom col plot.
Ensure .make_customColPlot always returns, even with '---' function. This is necessary to fill all_coordinates for receiving plots.
Minor comment changes, avoid unnecessary newline in commands.
Reorganized code blocks for easier reading, renamed cached variable.
Ensure custom plot functions work with the code tracker.
Added an ugly colour for the custom col plots.
Rebuilt NAMESPACE, docs.
Bug fixes to custom plot function, documented custom fun requirements.
Try to initialize custom col plot, observe function changes.
Cleaned up iSEE doc page.
Added function for generating custom column plots.
Integrated customColPlot into observers, select/table link setup.
Broke up large plotting functions for general use. Enforced character vector returns for commands.
Set up basic infrastructure for custom column plots.
Merge branch 'master' into renamed
Merge pull request #187 from csoneson/kevinrue-release-badge Release badge
Various fixes to pass CHECK, edit comment.
Simplified handling of plot.data.all in horizontal violins.
More selective generation of plot.data.all and plot.data.pre. Adapted functions to use *.all or *.pre only when available. Moved geom_blank out of create_points, into individual plotfuns. Minor fixes to internal docs formatting.
Release badge
Merge pull request #186 from csoneson/renamed Renamed feature expression to feature assay, for generality.
rename completed in the tour section
Updated version number, date, NEWS.
Merge branch 'master' into renamed
Changed 'Feature expression' to 'Feature assay' plot name. Changed 'featExpr' to 'featAssay' in variables, arguments.
Merge pull request #185 from csoneson/bump Bumpy bump
Merge remote-tracking branch 'upstream/master' into bump
Merge pull request #184 from csoneson/RELEASE_3_7 bump x.y.z versions to even y prior to creation of RELEASE_3_7 branch
Merge branch 'master' into RELEASE_3_7
Merge pull request #183 from csoneson/more_devel Fixed downsampling for violin plots.
Fixed downsampling for violin plots.
Add check for set.seed in R/*.R files - Also generalized the wrapper around codetools findGlobals
Update class() == check to Warning instead of Note
Add check for global option in vignette closes 22 - catch if global option for vignettes using knitr is eval=FALSE
Add check for system vs system2
Add check for class()== / class()!= - use is() over class()== - use !is() over class()!=
add smooth function
Updated NEWs file and pushed package version to 1.11.1
Bumping to version 1.1.1
Merge remote-tracking branch 'upstream/master' Merge para traer los primeros cambios desde Bioconductor
Add top_pathways & simplify get_pathways_summary
color scheme
(1) changed screen output in flattenGTF (not relevant to Rsubread, but anyways) (2) fixed a bug in featureCounts
Merge branch 'trunk' into devel
bump
import transform_gate from ggcyto
Merge branch 'trunk' into devel
bump
transform data and gates properly before compute_stats for in-line scale layer. #33
add transform methods for gates. #33
bump
support flowCore::quadGate. #33
Version bump
Bug fixes for smoothing lines Adding constrained smoothing so that smoothed lines can't extend beyond the range of (0,1) by smoothing logit-transformed proportions and plotting inverse-logit predictions. Also adding an adaptive smoothing span parameter that uses a larger span for regions with fewer points plotted (ranges from 1 to 0.75 for regions with up to 40 points (after that, span is fixed at 0.75)). In addition, adding a check that at least two points exist before trying to plot a line.
Add check for extreme beta values (unstable fit) Also ensure that meanDiff calculation is in the same direction as beta values.
Check for replicates within candidate regions This commit adds a check to ensure that candidate regions have replicates (with coverage at least 1) in each group for at least two CpGs within the region. This is only in effect for factor comparisons.
Changed to use colored plotting symbols in makeImages, similar to how the lattice plots looked.
version bump v1.9.1
fix for changes in httpuv package (>= 1.4.0)
start of vignette demonstrating interoperability
import some missing methods to quell check() NOTES
add coercion method from SingleCellExperiment
Update test for T/F usage - complaint that $T or [T,] usage was being treated as TRUE/FALSE. - knitr moved to imports instead of suggest as need purl to check Rmd - namespace update for codetools::findGlobals, knitr::purl - move T/F check to best coding practices - fix indentation of messages that indicate affected files - separate sapply, 1:n and T/F check into individual functions for unit testing - checkLogicalUseFiles get all Rnw, Rmd, man, and R (not in R/*) scripts for testing - findLogicalFile searches a file for T/F usage - safeFindGlobals to encapsulate findGlobals in tryCatch for assignment error cases - findLogicalRdir check R/*.R scripts for global These are handled separate for two reasons: 1. to output function rather than file name. 2. to avoid errors with class and global assignment - makeTempRFile function to encasulate Rnw, Rmd, and man code into a dummy function for use with codetools::findGlobals - clean whitespace - update tests
Test markMultipleEdges()
updated GRbaseCoverage
updated heatmapPlot
Updated COPYRIGHT notice
Added 'flatten' method for reducing countData object to a data.frame.
doc fixes
add endomorphic tile/window method, fix doc typos, version bump
endomorphic tile/slide methods
safer n and export chop_by methods
version bump
add in get_genome_info methods for files
update to docs
merge operators class
improvements to BAM reading
rlang has deprecated overscopes, replace with new_data_mask, testthat can't find the BiocGenerics versions of mean so causing errors in test - need to look into this more.
wayward print message
prelim optimisation of group_by for summarise and filter
remove white space
chop by works like grglist
add chop_by functions
fix tests
merge in master
better bam reding
experiments with operators
Small tweak
Remove no more needed "bindROWS" method for matrix objects
Add alias
Fix bindROWS() on 1D arrays
concatenateObjects() was renamed bindROWS() in S4Vectors 0.19.4
concatenateObjects() was renamed bindROWS() in S4Vectors 0.19.4
concatenateObjects() was renamed bindROWS() in S4Vectors 0.19.4
concatenateObjects() was renamed bindROWS() in S4Vectors 0.19.4
move examples to examples field
minor changes to resolve notes
import SummarizedExperiment functions
Merge pull request #18 from maddahi93/master Add colFinder and update doc with roxygen2
Update documentation
Update doc to roxygen2
add colFinder function
concatenateObjects() was renamed bindROWS() in S4Vectors 0.19.4
Rename concatenateObjects() -> bindROWS()
Make footnotes work with Pandoc 2.0 (fixes and closes)
Fixed conflickts in DESC
itemized NEWS
bumped
Warning remove
Adding dir.out option
Merge branch 'master' of
improving heatmap gene plot
Adding missing file
Improving documentation
Updates in doc
Version bump
updated README
metadata fixes for new devel version
fixes and added subset to pathSEA
removed CondSEA merging method from tests
exported and documented createMergedRepository, BiocCheck clean
Final raw mode - BiocCheck clean CondSEA now uses the new function .getPEPRanks to compute HDF5-cache-enabled ranks from HDF5. Added createMergedRepository Fixed collection names. Special characters are lost in file raw-mode file names, but now they are recovered using gene set collection names. as.CategorizedCollection not needed anymore for example pathways. Added missing documentation
This version includes an experimental merging method with caching that is probably going to be replaced with a less automatic method
added organism option for importMSigDB.xml, added minsize and maxsize for computing KS statistic, changed rawmode datasets, added PEPs ranking function as a parameter, minor bug fixes
Merge branch 'untested' of github.com:franapoli/gep2pep into untested
restoring rownames and colnames as chunk size solves the problem of huge file size
added .loadPEPs for raw mode
changed _ to # in raw mode file names
raw mode: temp file directly into repository
added slection of collections in raw mode import
importFromRawmode working
added hdf5 composition, fixed version number to comply with bioconductor
updated version number
added raw mode
couple minor changes to vignette
Fixed conflicts in version change
add fix from nalcala
Fix DF[IRanges(...), ] on a DataFrame with data.frame columns This is fixed by adding an extractROWS,data.frame,RangesNSBS method. Also move extractROWS,array,RangesNSBS method from the S4Vectors package.
Fix window() on a DataFrame with data.frame columns This is fixed by adding an extractROWS,data.frame,RangeNSBS method. Also move extractROWS,array,RangesNSBS method to the IRanges package.
update NEWS file
update NEWS file
remove unspoortedsystem win in .BBSoptions and bump the version
Export deprecated functions
added GitHub URL to DESCRIPTION
added README.md and .gitignore
bug fix: subsampling, filtering bug fix
Merge remote-tracking branch 'origin/master'
Update denovoDeletions.R
Update denovoDeletions.R
Update denovoDeletions.R
added README and .gitignore
updated DESCRIPTION (add GitHub URL)
updated README
corrected README
added README
added .gitignore
updated DESCRIPTION
added README
update for NEWS
update for NEWS
update for NEWS
closes #11, fix 'getter' example in 'Extensions' vignette
version bump
Fix 'getter' example in 'Extensions' vignette
bug fixed: put.attr.gdsn() fails to update the existing attribute
bug fix: error with betareg when less than 3 samples are used
bug fix: error with betareg when less than 3 samples are used
release bug fix
Fix version in master
update rbuild ignore
fix merge
update README description
add update on 2.0
version bump
bug fix: #21
Provide download link for human reference genome.
version bump
use apply with margin = 2, closes #17
Updated unit tests for change in ExpressionSet behavior.
Fix errors caused by auto-formatting, version to 0.99.0
Code re-formatting.
Minor updates to DESCRIPTION, NEWS, vignette.
Improvements to vignette, fix outlierFinder.Rd example.
add entries to vignette
shorten examples in phenoDist
Remove some apparently spurious aliases from summary-methods.R: -\alias{summary,ANY-method} -\alias{summary,diagonalMatrix-method} -\alias{summary,sparseMatrix-method}
Add example for phenoDist().
either method works, use bpparam consistent with docs
add examples and edit bpparam
add examples, fix docs, organize DESC
update upstream with origin merge changes
Merge branch 'master' of github.com:LiNk-NY/doppelgangR
Add example for vectorHammingDistance.R, add Marcel as package author.
add examples to docs
Remove duplicates from TCGA ovc study. Make AUC-annotated suitability table.
Syntax fixes, uncomment loading of tcga.res.
ROC plots for microarray-rnaseq matching, use primary samples only.
Save esets with primary tumors only.
log(x+1) for RNA-seq data.
modified: TCGA_microarray_RNAseq.R - turn off all but correlation doppelgangers
update pkg to pass checks
Microarray / RNA-seq paired datasets from TCGA.
add changes
minor edit to vignette
Merge with upstream master
Rename class file
Minor vignette updates (ROCR::prediction) and fixes to .Rbuildignore.
New Supplemental Figure 1.
TCGA.R: fix to work with updated cancer table, don't eliminate emargoed (last embargo ends 12/18/2015).
Merge branch 'master' of github.com:lwaldron/doppelgangR
Case study clean-up.
Merge branch 'master' of
added two more cell-lines
Performance on small ovc datasets.
Fixed "doppelgangR(esets[[1]]) : Intermediate pruning off but no addCols shortcut available. error."
fix merge
fix merge
esets -> eset.pair in phenoFinder.R (fixes bug)
Remove browser() line...
New function to check whether pair of esets are "close enough".
merge
Merge branch 'master' of
Analysis of GSE44104, get rid of _1.0 in filenames for *.R.
Leaving only confirmed duplicates.
refactor corfinder: moved combat into private function
smoking gun-finder no longer calls NAs matches.
File of confirmed doppelgangers.
Merge branch 'master' of github.com:lwaldron/doppelgangR
Spearman's correlation examples to vignette.
added NCI60/CCLE analysis
Unit test for missing one or both pData.
Bugfix so now runs when pData missing for both ExpressionSets.
Test all databases for smoking guns only, convert vignette to Rmd, add unit test for checking smoking guns only.
added biocViews, NEWS, renamed vignette to vignettes.
Fix documentation warnings for plot-method.
Fixed imports, ExpressionSet method. All unit tests passing. V. 0.11.
Put three ROC plots panels into one plot for RNA-seq vs microarray in ovarian.
HR vs. % duplication example.
Added .Rbuildignore.
Forgot to bump version number.
Fixed plot method so density line is scaled correctly.
Now look at samples instead of sample pairs for ROC plots in four cancer types. AUC goes from 0.98 to 0.97.
Merge branch 'master' of github.com:lwaldron/doppelgangR
Fixed doppelmelt in vignette.
removed sweave syntax
Added missing library(ggplot2).
Switch to Marcel's fork of RTCGAToolbox to use his extract function, and get rnaseq / rnaseq2 in every case (whichever has more samples).
used quantile, not median to sort
violin plot
Analyses for paper - TCGA table updates, ROC plots for 4 cancer types.
Add summary table of all samples.
Added suitability.table to TCGA.R.
Working analysis of within-dataset expression correlations for all TCGA data types.
Merge branch 'master' of github.com:lwaldron/doppelgangR
Oops, re-add inst/*.R.
Merge branch 'master' of github.com:lwaldron/doppelgangR Conflicts: inst/TCGA.R
Extractor function for RTCGAToolbox data.
Saving bonf.prob=1.0
Catch errors in TCGA.R
Merge branch 'master' of github.com:lwaldron/doppelgangR
Threshold at bonf.prob=1.0.
Doppelganger summarizer script (countdops.R) and TCGA downloader (TCGA.R).
Save esets lists.
Fix error in CRC caused by infinite expression values.
Slight correction to unit test.
More unit tests.
Add esets[c(i, j)] to digest arguments for caching, since now these are added after digest call, which had broken caching.
Passes R CMD check with only documentation and IMPORTS notes/warnings! Still need to document BiocParallel and cache.dir.
Housekeeping checkin. Added support for BiocParallel, unit tests, lots of code cleaning.
Fixed cases with one smoking gun or one row of output, return untransformed Pearson correlations so plots now show red lines again.
Scripts for running on CRC, bladder, breast and ovarian, with link to results for all but breast in README.md.
Fix summary() method and put this in vignette.
Overdue version bump to 0.9.0, point to print() method in show() method fixes one of the issues.
doppelgangR is building. Fix to CRC.R and addition of RUNME.sh to run CRC, breast and bladder examples.
Fixed corner case in corFinder when a feature has non-finite values (by removing that feature for ComBat). Added scripts to run doppelgangR on CRC, bladder, and ovarian.
removed one more deprecated reference to eset.pair
fixed na.output, got rid of eset.pair object so big lapply loop now seems to work?
Attempt to fix case when a pair of esets gets skipped for phenotype or smokinggun checking. Not sure if it works.
Merge branch 'master' of github.com:lwaldron/doppelgangR
outer2df.R: stop unless x is a matrix or x and y are both vectors.
Merge branch 'master' of
Increased verbosity. Code clean-up in outlierFinder.R.
runinline.sh: remove alias automatically
Overhaul of smoking gun finder so it fits the same API. Fixed plot.pair option to plot.
Change outlierFinder to use skew-t distribution instead of normal distribution.
housekeeping checkin: added skew-normal functions, but not yet used.
Fixed ?doppelgangR and plot method for doppelgang-class
added phenoDist function to phenoFinder.args
fixes in weighted dist
digest package
Flatten nested output@fullresults structure, add caching for phenotypes, update documentation.
Source Code & Build Reports »
Source code is stored in Git.
Software packages are built and checked nightly. Build reports:
Development Version »
Bioconductor packages under development:
Developer Resources:
|
https://bioconductor.org/developers/gitlog/
|
CC-MAIN-2018-22
|
refinedweb
| 4,032
| 50.84
|
Eclipse Dali. This article describes how to use Eclipse Dali for JPA mapping. This article is based on Eclipse 3.5 (Eclipse Galileo).
1. Overview
This article describes Eclipse Dali and does not give a general introduction into JPA. Please see Java Persistence API (JPA) with EclipseLink - Tutorial for an introduction.
To use Eclipse Dali you also need Eclipse DTP which is described in Eclipse DTP Tutorial
To use Eclipse Dali you also need Derby which is described in
or independently of Eclipse in Apache Derby.
2. Installation
Use the update manager to install from "Web, XML, and Java Development" the "Dali Java Persistence Tools" and "Dali Java Persistence Tools - EclipseLink Support (Optional)". Install also "EclipseLink JPA" from "EclipseRT Target Platform Components".
Please see Using the Eclipse Update Manager.
3. Using Dali
3.1. Project
Create a new project "de.vogella.dali.first" via.
Click twice next.
The JPA perspective should now be opened.
Create a package "de.vogella.dali.first".
Selectand create the following class.
package de.vogella.dali.first; import java.io.Serializable; import javax.persistence.Entity; import javax.persistence.Id; /** * Entity implementation class for Entity: Person * */ @Entity public class Person implements Serializable { @Id private int id; private String firstName; private String lastName; private static final long serialVersionUID = 1L; public Person() {; } }
Annotate now the class with @Entity (before the class name". This will activate the views "JPA Structure" and "JPA Details".
You can now use the right mouse button in the "JPA Structure" view to map your elements.
Now you can use the "JPA Details" view to define for example how the primatry keys should get defined, e.g., via a sequence "SEQUENCE".
4. About this website
5. Links and Literature
Nothing listed. ===.
|
http://www.vogella.com/tutorials/EclipseDali/article.html
|
CC-MAIN-2017-09
|
refinedweb
| 285
| 60.61
|
On Fri, Mar 21, 2003 at 01:53:25PM +0000, Des Small wrote: > I wouldn't use it here though, I'd use it for things like: > > def accumulator(val=0): > def inner_acc(amount): > lexical val > val = val + amount # I don't like +=, so there. > return val > return inner_acc(val) > > Since I don't expect the Python Priesthood (Bot-hood?) to be pleased > about this, I would want to market it as a less harmful version of > 'global'. It does do much the same thing, after all, and has much > the same conventions. You could also use it for a getter/setter factory: def make_getter_setter(initial_value=None): val = initial_value def get(): return val def set(new_value): lexical val val = new_value return (get, set) However, in Python it's more natural to write these things as classes. For instance: class Accumulator: def __init__(self, val=0): self.val = val def __call__(self, amount): self.val = self.val + amount return self.val # Using lambda to prove I don't hate lisp accumulator = lambda val=0: Accumulator(val).__call__ I suspect that the closure-based version would be slightly more efficient if it were writable, since the natural way to use Accumulator would create a bound method at each call, and the references to 'val' in __call__ are all namespace operations. I *think* that the cell operations are simple C array indexing when executed, and if that's true, I don't see why the cell-setting instruction wouldn't be. Jeff
|
https://mail.python.org/pipermail/python-list/2003-March/199632.html
|
CC-MAIN-2014-15
|
refinedweb
| 249
| 59.94
|
How to apply optical flow to initial image?
I found old posts online as well as a code sample in the master branch of opencv here...
I will copy/paste the code for future reference:
def warp_flow(img, flow): h, w = flow.shape[:2] flow = -flow flow[:,:,0] += np.arange(w) flow[:,:,1] += np.arange(h)[:,np.newaxis] res = cv.remap(img, flow, None, cv.INTER_LINEAR) return res
Say I have two grayscale images
im1 and
im2 with values between 0 and 1. I computed the flow with:
flow = cv2.calcOpticalFlowFarneback(im1g, im2g, None, 0.5, 3, 15, 3, 5, 1.2, 0)
I understood that the function
warp_flow expects the original images with values in [0,255] and the computed flow computed on grayscale as input, so I called it with:
prediction = warp_flow(origim1, flow)
However, when I write the truth
origim2 and the
prediction to disk with
cv2.imwrite, it turns out that the prediction is pretty much like
origim1. So the flow is not modifying the input image.
I appreciate if you can explain what is happening or if you can provide a working example that I could try with my images. Is the
warp_flow function up-to-date?
Thank you in advance,
Hi @berak, let me formulate the problem then. Given two images
im1and
im2, I want to compute the velocity field
vxand
vythat transforms
im1to
im2. Given this velocity field and
im1, I would like to reconstruct
im2. In a practical scenario, I will be applying the transformation multiple times with synthetic velocities to produce synthetic images, and finally a video.
|
https://answers.opencv.org/question/186403/how-to-apply-optical-flow-to-initial-image/
|
CC-MAIN-2020-29
|
refinedweb
| 266
| 67.65
|
Type: Posts; User: benbridle38
Hi,
I am trying to 10 different objects in the maximum different combinations, so 10! combinations, and be able to use each combination in my code. I hear you are supposed to used a factoradic...
oky, really really stuck now,
this is my code:
#include <iostream.h>
#include <conio.h>
#include <math.h>
#include <stdlib.h>
int main()
can't, lol dont have the program,
will try it tommorrow when i can get to the computer labs.
Thanks for your help bud
yea sorry typo
* Y=pow(sin(pow(acos(x), 2/N)), 2/N);
there u go, does this look like it might work though now?
i had a go using the pow notation,
Y = pow(sin(pow(acosx,2/N),2/N);
does that look right at all?
Hi,
dont really know much about programming or anything like this.
But basicly in my program i am trying to write the following formula:...
|
http://forums.codeguru.com/search.php?s=3eb23476fe73e9852958e695d9590336&searchid=7209929
|
CC-MAIN-2015-27
|
refinedweb
| 158
| 76.62
|
We build a number of our Contentful sites with Gatsby, and as a result most of our sites have a build time (30s to 4m); we’ve implemented some work arounds that allow clients to preview content in the production build. This however has a wait time, as originally pointed out by Work and Co. We opted for another route that would allow for instant previews, and not require the client to publish articles before they were ready.
This was even more important for Supercluster. We built them a very modular article templates that allowed for a diverse editorial experience. We also created an image module that could be rendered in various unique ways on the screen. This required the client to essentially build/test the article and make sure images/text never overlapped depending on the content modules. We also used the Contentful Rich Text editor to inline modules and make the editing experience even smoother.
Enter Next.js
We needed a quick way to replicate the current production code in a preview environment. Since we were using Gatsby the frontend language was built in React. As a result it was a sort of no brainer to pick Next.js as the framework for rendering our dynamic previews.
Gotchas —
Next.js doesn’t do shared components from other projects very well
We’re using PostCSS in our Gatsby build so importing and compiling that in Next isn’t something supported out of the box
Because we’re copy and pasting the Gatsby build into Next.js we’ll also need to install all the dependencies in the Gatsby app (at least for the template we’re previewing)
We’ll want to run a server so we can serve a robots.txt disallow
Gatsby { Link } is trouble… more on that later
Fixing the shared components… I spent the better part of ~5 hours troubleshooting this, and eventually caved on a simpler solution that wouldn’t require things like symlinking/transpiling external modules with complex configurations. The solution? Copy your Gatsby directory into your next app before you build it so you can always access the most up-to-date shared components from the parent project.
Getting our styles out of Gatsby also proved relatively difficult. Getting PostCSS to work in Next.js is its own can of worms. As a result I opted to handle all the PostCSS work in the 'package.json' file. Depending on the architecture of your project you may not run into this issue. I’ll share my 'package.json' scripts for this project below regardless:
"scripts": { "dev": "next", "copy": "npm run build:css; rm -rf gatsby/; cp -R ../src gatsby", "copy-fonts": "cp -R ../static static", "build": "next build", "build:css": "postcss gatsby/styles/main.css -o static/main.css", "start": "next start" },
Keep in mind, while developing you could run a prestart script as well, but you’ll not want that in the now production because it will run into trouble finding the parent directory!
Now we can finally build our preview page component. I’m going to show you the whole component and then explain what’s going on:
import React from 'react' import Head from 'next/head' import contentfulAPI from '../api/contentful' import Article from '../gatsby/templates/article.js' export default class extends React.Component { static async getInitialProps ({ query }) { const response = await contentfulAPI.getEntries({ content_type: 'article', 'sys.id': query.id, include: 8 }) return { article: response.items[0] } } render () { const { title, tags } = this.props.article.fields const context = { data: this.props.article.fields, title: title, tags: tags } return ( <div> <Head> <link href='/static/main.css' rel='stylesheet' /> </Head> <Article pageContext={context} /> </div> ) } }
Those of you familiar with Next.js shouldn’t see anything that unusual here. We’re importing our article from our copied Gatsby template file. We’re also including the compiled CSS from our build task above in the header.
The only real work going on here is the query to get the article based on the id. We’re not including things like layout/header/footer modules as we don’t need them for the preview.
To get this hooked up you have 2 options, the first being ngrok for local development, and the second being a now deployment. Once you have an endpoint set up we’ll want to go into Contentful. 'Settings->Content Preview' from there you can specify the content type you want to be able to preview, in this case 'Articles' and set the url you’ll want to handle the preview.
e.g.{entry.sys.id}
Once that’s hooked up you should now be able to preview articles from Contentful to your local/deployed environment.
Additional Gotchas — Data Structure
There’s always something else right? So in my case I actually don’t use the gatsby-contentful-source plugin. I ran into a lot of problems with modular nested content and it would just continuously cause build fails depending on the initial article that was queried to build the schema. So I rolled my own source plugin for Contentful. This allowed me to fetch the Contentful data on my own, and as a result I simply handled the data in a large JSON response. This actually benefited me in the long run because the above app has the same data structure. If you end up using the default source plugin you will not be able to simply reference the article.js template like I have. This is because the GraphQL data structure is so different from the Contentful Preview API response.
Robots.txt
Because this is a preview server, we’ll want to add a 'robots.txt' file, so the crawlers don’t ever index this content unnecessarily. Since we’re using Next.js this is pretty easy. We’ll have to set up a 'server.js' file and populate it as follows:
const path = require('path') const express = require('express') const next = require('next') const port = parseInt(process.env.PORT, 10) || 3000 const dev = process.env.NODE_ENV !== 'production' const app = next({ dev }) const handle = app.getRequestHandler() const options = { root: path.join(__dirname, '/raw'), headers: { 'Content-Type': 'text/plain;charset=UTF-8' } } app.prepare().then(() => { const server = express() server.get('/robots.txt', (req, res) => ( res.status(200).sendFile('robots.txt', options) )) server.get('*', (req, res) => { return handle(req, res) }) server.listen(port, err => { if (err) throw err console.log(`> Ready on{port}`) }) })
You just need to create a robots.txt file in a directory within your app, in my case I made one in a folder called 'raw'.
import { Link } from ‘gatsby’
If you’re using this in any of your components that you are previewing you’ll encounter errors. Gatsby Link mounts a static query that Next.js isn’t a big fan of. So we’re gonna use replace to update all the instances of Link in our components after we copy them over…
const replace = require('replace') replace({ regex: "import { Link } from 'gatsby'", replacement: '', paths: ['./gatsby/'], recursive: true, silent: true }) replace({ regex: "<Link to=", replacement: '<a href=', paths: ['./gatsby/'], recursive: true, silent: false }) replace({ regex: "</Link>", replacement: '</a>', paths: ['./gatsby/'], recursive: true, silent: false }) replace({ regex: "<Link ", replacement: '<a ', paths: ['./gatsby/'], recursive: true, silent: false }) replace({ regex: "to=", replacement: 'href=', paths: ['./gatsby/'], recursive: true, silent: false })
It’s just one final gotcha. If you have any other variations feel free to add them, and then just add another script to package.json like 'node replace.js'.
Last Thoughts
The hardest part of this was importing the components and styles out of the Gatsby app into Next.js. I also tried to do this with react-create-app but that also doesn’t like importing from parent directories. If you have a larger organization you could potentially leverage npm link and host private repos for your shared components. I however didn’t want to add any additional levels of technical debt to this project, so I kept everything in the build process documented in the 'package.json'.
Also choosing Next.js meant that we could quickly deploy to services like now and create alias now domains without needing to set up additional subdomains for the client.
This article was originally posted on medium. Read it here.
@OnClick({ R.id.view_a, R.id.view_b, R.id.view_c }) public void onClickSomething(View v) { // Handle click event }
|
https://www.contentful.com/blog/2019/04/24/content-preview-for-your-contentful-gatsby-site-with-nextjs/
|
CC-MAIN-2021-21
|
refinedweb
| 1,392
| 57.87
|
Example In java
BorderLayout Example In java
...
BorderLayout in java awt package. The Border Layout is arranging and resizing... for using this program. This Java Application uses BorderLayout for setting
java BorderLayout
java BorderLayout How are the elements of a BorderLayout organized
Java Dialogs - Swing AWT
visit the following links:
java swing - Swing AWT
:
Thanks...java swing how to add image in JPanel in Swing? Hi Friend,
Try the following code:
import java.awt.*;
import java.awt.image.
Java - Swing AWT
Java Hi friend,read for more information,
query - Swing AWT
java swing awt thread query Hi, I am just looking for a simple example of Java Swing swings - Swing AWT
. swings I am doing a project for my company. I need a to show... write the code for bar charts using java swings. Hi friend,
I am
java - Swing AWT
information,
Thanks...java i want a program that accepts string from user in textfield1 and prints same string in textfield2 in awt hi,
import java.awt.
|
http://www.roseindia.net/tutorialhelp/allcomments/3863
|
CC-MAIN-2014-52
|
refinedweb
| 167
| 58.99
|
#include <genesis/tree/mass_tree/tree.hpp>
Inherits DefaultEdgeData.
Data class for MassTreeEdges. Stores the branch length and a list of masses with their positions along the edge.
See MassTree for more information.
Definition at line 148 of file tree/mass_tree/tree.hpp.
Polymorphically copy an instance of this class. Use instead of copy constructor.
Reimplemented from DefaultEdgeData.
Definition at line 182 of file tree/mass_tree/tree.hpp.
Definition at line 172 of file tree/mass_tree/tree.hpp.
Polymorphically create a default-constructed instance of this class with the same derived type as it was called on.
Reimplemented from DefaultEdgeData.
Definition at line 177 of file tree/mass_tree/tree.hpp.
List of masses stored on this branch, sorted by their position on the branch.
This data member maps from a position on the branch to the mass at that position. In order to be valid, the positions have to be in the interval [0.0, branch_length]. See mass_tree_validate() for a validation function.
Definition at line 199 of file tree/mass_tree/tree.hpp.
|
http://doc.genesis-lib.org/classgenesis_1_1tree_1_1_mass_tree_edge_data.html
|
CC-MAIN-2018-17
|
refinedweb
| 171
| 54.59
|
program requests and reads a list of file names (one per line). Program ends when a blank line is entered.
If a file name endsWith ".txt" then print "Text File: xxxx" where xxxx is the full file name. If the file name contains the string "del" print "Delete: xxxx". If the file satisfies both conditions just do the text file option. Otherwise do not print anything.
So far I have done this but I can't figure out how to do the part where you output Text file: if it contains both .txt and del. Could anyone help me out?
/*
* To change this template, choose Tools | Templates
* and open the template in the editor.
*/
package assignment2;
import java.util.Scanner;
/**
*
* @author Adam
*/
public class FileType {
public static void main(String[] args) {
String fileNames = "";
Scanner scan = new Scanner(System.in);
System.out.println("Enter file names (blank line to end):");
fileNames = scan.nextLine();
while (fileNames.contains(".txt")) {
System.out.println("Text File:" + fileNames);
fileNames = scan.nextLine();
while (fileNames.contains("del")) {
System.out.println("Delete:" + fileNames);
fileNames = scan.nextLine();
}
}
}
}
|
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/13794-what-do-i-need-do-printingthethread.html
|
CC-MAIN-2016-36
|
refinedweb
| 177
| 71.21
|
Introduction to Spring Boot Scheduler
As the name suggests scheduler is used to schedule a particular task or activity that we want to execute at a fixed time in a day or anytime; for this mechanism to implement in spring boot, we can use the scheduler. In spring boot, we can implement a scheduler very easily with the use of annotations; only no other type of configurations are required to make this work in spring boot. Inside the schedule, we can write the logic that we want to execute at speck time of the day, to mention the time we have to follow the standard given by the spring boot. Here we will see its internal working and how we can specify the time inside the scheduler in the spring boot application.
Syntax:
As we know that n spring boot, we have to configure or enable everything before we actually use it inside the application; this is the same things with the scheduler; also, let’s take a closer look at the syntax for the enabling scheduling and use it inside the program.
@Scheduled(your expression)
public void method_name() {
// logic goes here ..//
}
As you can see in the above syntax, we have used @Scheduled annotation over the method to use scheduling inside the program.
@Scheduled(1000)
public void test() {
// logic here ..//
}
How does Scheduler work in Spring Boot?
As of now, we already know that scheduler in spring boot or in general is used to schedule a task or activity to be executed at a fixed time to perform some logic in the application.
While using the scheduler in spring boot, we have two things to be considered while using it.
- The method that annotates with @schedul annotation should not accept any parameter inside it.
- The methods that are annotated with @schedul annotation should return a void type-in program.
These are two things which need to keep and mind while using it otherwise;, we will get the error.
In this section, we will first see the things required to set up for schedule and different ways to provide the time inside it.
1. In spring boot, we can schedule an activity by using the cron job; this is very flexible and easy to use. By the use of it, we can specify the different parameters in the expression. This expression allows us to initialize the day, month, minute, etc., when we want our task to run. This expression consists of five fields, which should be in the same order.
(minute) (hour) (day-of-month) (month)> (day-of-week) (command)
We can specify our value in the same order; if any of the fields are missing in the expression, it will throw a runtime exception.
Below see one sample syntax to initialize the cron using the expression, which will execute the task in every day at 12:00 PM.
Example:
0 12 * * ?
2. Second way we have is to pass any fixed value inside the scheduler, which will run that scheduler that this field time.
@Scheduled(fixedRate = 1000)
public void test() throws InterruptedException {
// logic goes here ..
}
In the above line of code, we had used fixedRate the attribute o Scheduled in spring boot, and assign it value as ‘1000’ here so this will invoke the task in very second when the server started. Also, there will be no delay between the start of the server and the scheduler first run; it will execute the first time immediately when the server is up. If we want to delay, we can use the after attribute of the scheduler for this.
3. With delay: To provide initial delay, we can use the initialDelay attribute of the Scheduled annotation, which will start running the task with this initial delay only.
Example:
@Scheduled(fixedDelay = 1000, initialDelay = 5000)
public void test() {
// logic will go here ..//
}
4. We have one more attribute that can explain the delay between the current and the next execution that will happen. For this, we can use ‘delay’ inside the Scheduled annotation.
5. Usage of cron: Below, see the syntax for using crone in @Scheduled annotation.
Example:
@Scheduled(cron="* * * * *")
public void test() {
// logic here .. //
}
Steps need to follow to implement scheduler in spring boot application which is as follows:
1. First, we will develop the application from scratch using the spring initializer, all the necessary details there to make it run.
URL:
2. We have to enable scheduling in the main application; without this, it will not work, and the task will not execute. To enable the scheduling, we used one annotation as @EnableScheduling, which needs to be used in the main class.
Example:
@SpringBootApplication
@EnableScheduling
public class SchedularApplication {
public static void main(String[] args) {
SpringApplication.run(SchedularApplication.class, args);
}
}
3. Now, we are ready to create the method which will execute a task at a specific time, create a separate class and create one method inside it.
Example:
@Component
public class SchedulerDemo {
@Scheduled(cron = "0 * 2 * * ?")
public void execute() {
// some operation will go here ..//
// like db update etc
}
}
We have used cron expression here, but you can use anything as per the requirement.
Conclusion
By using scheduling in an application, we can execute the task independently and update or create thousands of records in the database, which will not impact the user experience with the application. If you want to automatically trigger a method that will execute some logic, you can go for the scheduler in spring boot, easy to use and handle.
Recommended Articles
This is a guide to Spring Boot Scheduler. Here we discuss the introduction and how the scheduler works in spring boot? Respectively. You may also have a look at the following articles to learn more –
|
https://www.educba.com/spring-boot-scheduler/?source=leftnav
|
CC-MAIN-2022-40
|
refinedweb
| 949
| 60.65
|
Difference between IQueryable<T> vs IEnumerable<T>
A lots of time Confusion between IQueryable and IEnumerable interface because, they are look like same and when we start writing a code often, we will choose a wrong approach between them. From behind this reason was I am unable to know perfectly about these two. Here, we will try to explain a differences.
IQueryable interface:
IQueryable exists in System.Linq Namespace. It only forward only over a collection. This is a better query data from out-memory (like remote database, service) collections. When we access a data from a database IQueryable execute select query on server side with all filters. It support Lazy Loading that was better for paging.
IQueryable<T> is the interface that allows LINQ-to-SQL to work. If we want to refine another query it will executes, if necessary.
In code:
IQueryable<Student>
stud = ………….
var studentGrade = db.Students.stud .Where(x =>x.grade<5);
That code will execute SQL to only select Students which grade is <5. And we have also filter all other students which grade above from 5.
IEnumerable interface:
Enumerable exists in System.Collections Namespace. It only forward only over a collection. IEnumerable is best to query data from in-memory collections. While we access data from database, IEnumerable interface execute select query on server side and load data in-memory on client side and then filtering data. IEnumerable is suitable for LINQ to Object queries. IEnumerable supports deferred execution. IEnumerable does not support custom query and also be a lazy loading.
IEnumerable<T> is the interface that allows LINQ-to-Object to work. It means that all objects matching the original query will have to be loaded into memory from the database.
IEnumerable<Student>
stud = ……….
var studentGrade = db.Students.stud .Where(x =>x.grade<5);
This is quite an important difference, and working on IQueryable<T> can in many cases save you from returning too many rows from the database. Another prime example is doing paging: If you use Take andSkip on IQueryable, you will only get the number of rows requested; doing that on an IEnumerable<T> will cause all of your rows to be loaded in memory.
Differences:
There are many
differences as below:
IEnumerable:
1.
IEnumerable is exists under the System.Collections namespace.
2. IEnumerable is suitable for querying data from in-memory collections.
3. While querying data from the database, IEnumerable executes "select query" on the server-side, loads data in-memory on the client-side and then filters the data.
4. IEnumerable is also be better work for LINQ to Object and LINQ to XML queries.
IQueryable:
1. IQueryable is exists under the System.Linq Namespace.
2. IQueryable is suitable for querying data from out-memory (like remote database, service) collections.
3. While querying data from a database, IQueryable executes a "select query" on server-side with all filters.
4. IQueryable is beneficial for LINQ to SQL queries.
I always seek to read your articles.
It is great to associate with such a blog.
|
https://www.mindstick.com/blog/11030/difference-between-iqueryable-t-vs-ienumerable-t
|
CC-MAIN-2017-47
|
refinedweb
| 505
| 58.58
|
Opened 8 years ago
Closed 8 years ago
#15645 closed (duplicate)
HTTP methods in urls.py
Description
It would be nice to have possibility to distinguish a HTTP method in urls.py.
IMHO it would be clearer and more extensible in future
for example:
urlpatterns = patterns('', url ('POST', r'/user/(?P<username>\d+)$', 'myapp.views.user.view1'), url ('GET', r'/user/(?P<username>\d+)$', 'myapp.views.user.view2'), url ('DELETE', r'/user/(?P<username>\d+)$', 'myapp.views.user.delete'), )
Change History (6)
comment:1 Changed 8 years ago by
comment:2 Changed 8 years ago by
it could be backward compatible.
HTTP methods could be extracted if only there is an url() function, not simple list ()
and url will check for its first param, if it is HTTP method GET, PUT, DELETE, POST, etc, then do action with regex and view
it not, do the old way
However I think it is not necessary complicated.
I suggest a http function:
urlpatterns = patterns('', http ('POST', r'/user/(?P<username>\d+)$', 'myapp.views.user.view1'), http ('GET', r'/user/(?P<username>\d+)$', 'myapp.views.user.view2'), http ('DELETE', r'/user/(?P<username>\d+)$', 'myapp.views.user.delete'), )
and url do the same as it is now
comment:3 Changed 8 years ago by
I know that this maybe encumber an urlpatterns but consider this:.ajax (r'/user/(?P<username>\d+)$', 'views.view1'), http.post (r'/user/(?P<username>\d+)$', 'views.view2'), http.get (r'/user/(?P<username>\d+)$', 'views.view2'), http.delete (r'/user/(?P<username>\d+)$', 'views.delete'), # an url function will do the old way url (r'^', include(admin.site.urls)), )
or equivalent ('POST', r'/user/(?P<username>\d+)$', 'views.view1'), http ('GET', r'/user/(?P<username>\d+)$', 'views.view2'), http ('DELETE', r'/user/(?P<username>\d+)$', 'views.delete'), # an url function will do the old way url (r'^', include(admin.site.urls)), )
therefore maybe a request not http module name is better to consider
IMHO:
first (http.ajax, http.get, http.post...) solution is the most clear and the best fit to the explicit python policy
it is simple and I think it would boost urls dispatcher performance
when it may easy and fast filter lot of regexp pattern, hence there is no need to lookup in POST list regexps when we have a GET request
maybe distinguishing of ajax methods isn't a bad idea too:
from django.conf.urls.defaults import ajax from django.conf.urls.defaults import patterns urlpatterns = patterns('myapp', ajax.post (r'/user/(?P<username>\d+)$', 'views.view2'), ajax.get (r'/user/(?P<username>\d+)$', 'views.view2'), ajax.delete (r'/user/(?P<username>\d+)$', 'views.delete'), )
It gives a little logic to urls.py but gives a performance, simplicity and readability in return
this in conclusion would give us:
from django.conf.urls.defaults import ajax from django.conf.urls.defaults import http from django.conf.urls.defaults import url from django.conf.urls.defaults import patterns from django.conf.urls.defaults import include from django.contrib import admin urlpatterns = patterns('myapp', ajax.post (r'/user/(?P<username>\d+)$', 'views.viewAjax1'), ajax.get (r'/user/(?P<username>\d+)$', 'views.viewAjax2'), ajax.delete (r'/user/(?P<username>\d+)$', 'views.delete'), ajax.all (r'/user/(?P<username>\d+)$', 'views.delete'), #all type of methods ajax (r'/user/(?P<username>\d+)$', 'views.delete'), #all type of methods ajax ('POST', r'/user/(?P<username>\d+)$', 'views.view2'), ajax ('GET', r'/user/(?P<username>\d+)$', 'views.view2'), ajax ('DELETE', r'/user/(?P<username>\d+)$', 'views.view2'), http.post (r'/user/(?P<username>\d+)$', 'views.view2'), http.get (r'/user/(?P<username>\d+)$', 'views.view2'), http.delete (r'/user/(?P<username>\d+)$', 'views.delete'), http (r'/user/(?P<username>\d+)$', 'views.delete'), #all type of methods http ('POST', r'/user/(?P<username>\d+)$', 'views.view2') http ('GET', r'/user/(?P<username>\d+)$', 'views.view2') http ('DELETE', r'/user/(?P<username>\d+)$', 'views.view2') # an url function will do the old way url (r'^', include(admin.site.urls)), )
comment:4 Changed 8 years ago by
Unless I'm misreading the code, method based dispatch already exists in one form, implemented in the new Class Based Views (see View in django.views.generic.base).
As such, implementing the same basic functionality elsewhere (even if its a better fit, which the urls configuration may well be) seems counter-intuitive, and against the oft harkened zen of there being only one true way.
comment:5 Changed 8 years ago by
Class Based Views View.dsipatch does not provide such clear and intuitive way of defining urls.
And require an application to run view to check if it able to handle a request.
IMHO this way gives you better scalability performance and easy of use.
Look at those sample code (I am thinking about last one), and you know already which view is when executed, without having to see the inner code of any classes and views.
When you look at somebody's else project you are not hoping to read views to see how one url is handled.
When request is POST it looks up in POST urls and quickly omit not fitting GET and DELETE methods.
My function views are full of those ifs:
if request.is_ajax: #do stuff else: if request.method == "POST": #do something else: #do else
Hence my true code is almost two indents inside those boilerplate if structure.
Which is always the same... it takes a lot of python beauty and readibility from code.
I know I am able to do
urlpatterns = patterns('myapp', url (r'/ajax/user/(?P<username>\d+)', 'views.myview'), url (r'/post/user/(?P<username>\d+)', 'views.myview'), url (r'/get/user/(?P<username>\d+)', 'views.myview'), url (r'/delete/user/(?P<username>\d+)', 'views.myview'), )
but what with reusability?
We have to change this to something like r'/user/post/...' and r'/user/get/...'
When first part of url determines app the second will have to be a method... and so on.
Every django setup has completely different approach to handle urls. Therefore I think that this standardization I mention in my second post will solve problem, at least will provide a reasonable and logic way do define urls.
Everywhere when I am looking for, people is talking about distinguishing methods inside view in nested ifs... I find it ugly, and necessary. IMHO urls dispatcher (the name dispatcher defines it!) should do this stuff.
Not some kind of inner class based views magic. I think that magic belongs to ruby and RoR, and explicit is the true pythonic way.
BTW. How this dispatch method got inside class based views?
Isn't it suppose to be in some kind of dispatcher in modules? IMHO its logically do not fit to the Class Based Views.
What do you think?
Why not, although the suggested API would not be backwards compatible. Also, if the method didn't match then a 404 would systematically be returned, which is quite restrained. Marking DDN for now.
|
https://code.djangoproject.com/ticket/15645
|
CC-MAIN-2019-04
|
refinedweb
| 1,166
| 67.55
|
getting a compile error on line 24.
#include<iostream> #include<string> #include<vector> #include <algorithm> using namespace std; //global constants const char X = 'X'; const char O = 'O'; const char EMPTY = ' '; const char TIE = 'T'; const char NO_ONE = 'N'; //function prototypes void instructions(); char askYesNo(string question); int askNumber(string question, int high, int low = 0); char humanPiece(); char opponent(char piece); void displayBoard(const vector<char>& board); char winner(const vector<char>& board); bool isLegal(const vector<char>& board, int move); int humanMove(const vector<char>& board, char human); int computerMove(vector<char> board, char computer); void announceWinner(char winner, char computer, char human); //main function int main(){ int move; const int NUM_SQUARES = 9; vector<char> board(NUM_SQUARES, EMPTY); instructions(); char human = humanPiece(); char computer = opponent(human); char turn = X; displayBoard(board); while (winner(board) == NO_ONE) { if (turn == human) { move = humanMove(board); board[move] = human; } else { move = computerMove(board, computer); board[move] = computer; } displayBoard(board); turn = opponent(turn); } announeWinner(winner(board), computer human); return 0; } //displays game instructions and gives the computer's opponent some attitude void instructions(){ cout << "Welcome to the extreme man v. machine throwdown: Tic-Tac-Toe.\nWhere human brain is pit against silicon processor\n\n"; cout << "Make your move by entering a number, 0-8. The number\ncorresponds to the desired board position, as illustrated:\n\n"; cout << " 0 | 1 | 2\n"; cout << " --------\n"; cout << " 3 | 4 | 5\n"; cout << " --------\n"; cout << " 6 | 7 | 8\n\n"; cout << "Prepare yourself, human. The battle is about to begin\n\n"; } //asks a yes or no question. keeps asking the question until the player enters either a y or an n. it recieves a question and returns either a 'y' or an 'n'. char askYesNo(string question){ char response; do { cout << question << " (y/n): "; cin >> response; } while (response != 'y' && response != 'n'); return response; } //asks for a number within a range and keeps asking until the player enters a valid number. it recievesd a question, a high number, and a low number. it returns a number within the range specified int askNumber(string question, int high, int low){ int number; do { cout << question << " (" << low << " - " << high << "): "; cin >> number; } while (number > high || number < low); return number; } //asks player if he wants to go first, and returns the humans piece based on that choice . . . Tic-Tac-Toe tradition, X goes first char humanPiece() { char go_first = askYesNo("Do you require the first move?"); if (go_furst == 'y') { cout << "\nThen take the first move. You will need it.\n"; return X; } else { cout << "\nYour bravery will be your undoing ... I will go first.\n" return O; } //gets a piece ('X' or 'O') and returns the opponents piece ('X' or 'O') char opponent(char piece) { if (piece == X) return O; else return X; } void displayBoard(const vector<char>& board) { cout << "\n\t" << board[0] << " | " << board[1] << " | " << board[2]; cout << "\n\t" << "-"; cout << "\n\t" << board[3] << " | " << board[4] << " | " << board[5]; cout << "\n\t" << "-"; cout << "\n\t" << board[6] << " | " << board[7] << " | " << board[8]; cout << "\n\n"; } char winner(const vector<char>& board) { // all possible winning rows const int WINNING_ROWS[8][3] = { {0, 1, 2}, {3, 4, 5}, {6, 7, 8}, {0, 3, 6}, {1, 4, 7}, {2, 5, 8}, {0, 4, 8}, {2, 4, 6} }; const int TOTAL_ROWS = 8; // if any winning row has three values that are the same (and not EMPTY), // then we have a winner for(int row = 0; row < TOTAL_ROWS; ++row) { if ( (board[WINNING_ROWS[row][0]] != EMPTY) && (board[WINNING_ROWS[row][0]] == board[WINNING_ROWS[row][1]]) && (board[WINNING_ROWS[row][1]] == board[WINNING_ROWS[row][2]]) ) { return board[WINNING_ROWS[row][0]]; } } // since nobody has won, check for a tie (no empty squares left) if (count(board.begin(), board.end(), EMPTY) == 0) return TIE; // since nobody has won and it isnt a tie, the game aint over return NO_ONE; } inline bool isLegal(int move, const vector<char>& board) { return (board[move] == EMPTY); } int humanMove(const vector<char>& board, char human) { int move = askNumber(Where will you move?, (board.size() - 1)); while (!isLegal(move, board)) { cout << "\nThat square is already occupied, foolish human.\n"; move = askNumber("Where will you move?", (board.size() - 1)); } cout << "Fine...\n"; return move; } { cout << "I shall take square number: "; // if computer can win on next move, make that move for(int move = 0; move < board.size(); ++move) { if (isLegal(move, board)) { board[move] = computer; if (winner(board) == computer) { cout << move << endl; return move; } // done checking this move, undo it board[move] = EMPTY; } } // if human can win on next move, block that move char human = opponent(computer); for(int move = 0; move < board.size(); ++move) { if (isLegal(move, board)) { board[move] = human; if (winner(board) == human) { cout << move << endl; return move; } // done checking this move, undo it board[move] = EMPTY; } } // the best moves to make, in order const int BEST_MOVES[] = {4, 0, 2, 6, 8, 1, 3, 5, 7}; // since no one can win on next move, pick best open square for(int i = 0; i < board.size(); ++i) { int move = BEST_MOVES[i]; if (isLegal(move, board)) { cout << move << endl; return move; } } } void announceWinner(char winner, char computer, char human) { if (winner == computer) { cout << winner << "s won!\n"; cout << "As I predicted, human, I am triumphant once more proof\n"; cout << "that computers are superior to humans in all regards.\n"; } else if (winner == human) { cout << winner << "s won!\n"; cout << "No, no! It cannot be! Somehow you tricked me, human.\n"; cout << "But never again! I, the computer, so swear it!\n"; } else { cout << "Its a tie.\n"; cout << "You were most lucky, human, and somehow managed to tie me.\n"; cout << "Celebrate, drink a beer... for this is the best you will ever achieve.\n"; } }
This post has been edited by RGLAsnakeMan: 05 May 2010 - 08:38 AM
|
http://www.dreamincode.net/forums/topic/172121-tic-tac-toe/
|
CC-MAIN-2016-36
|
refinedweb
| 958
| 59.13
|
Counting pixels in Blender
UPDATE 2019: the image of the node setup has been lost.
For my research, I’m using Blender to generate images. I wanted to know how visible a certain object is in the final render (i.e. how many pixels it occupies). For this there is the “object index” render pass (aka “IndexOB” in the compositor). I’ve been struggling with it, since it always outputs that the index is 0, even though there are multiple objects in the scene.
Well, with the help of mfoxdogg on the #blender IRC channel, we found a solution: You need to set the index by hand, for every object you’re interested in. If you go to the object properties (in the properties explorer), in the section “Relations” there is a slider “Pass Index”. This is set to 0 by default, and you can set it to any positive number you want. This is then reflected in the output of the “IndexOB” render pass.
So to solve my problem, I needed to count the pixels that a certain object
occupies in the image. For this I connected a viewer node to the “IndexOB” pass.
Once you’ve rendered, you can then access the pixel data using
bpy.data.images['Viewer Node'].pixels. It’s a one-dimensional array of floats,
in the RGBA format. This means that every pixel is represented as four
consecutive floats.
The IndexOB data is stored in the RGB channels, and the alpha channel is set to 1.0. You could write some code to only inspect the R channel for its value and count the ones that are equal to the object index you’re interested in. I chose a more lazy approach of counting all RGBA values that are equal to the index, then dividing by 3. For this you do need an object index larger than 1, as the alpha channel is always set to 1 too. So here is the code:
import bpy # I'm interested in the object named "L" ob_name = 'L' # Set the "pass index" bpy.data.objects[ob_name].pass_index = 42 # Render the image bpy.ops.render.render() # Count all pixels that have value = 42. We count all R, G and B components # separately, so divide by 3 to get the actual number of pixels pixel_count = sum(1 for p in bpy.data.images['Viewer Node'].pixels if p == 42) / 3 print('Pixel count for object %r : %i' % (ob_name, pixel_count))
|
https://stuvel.eu/post/2014-02-28-counting-pixels-in-blender/
|
CC-MAIN-2021-31
|
refinedweb
| 411
| 63.7
|
Bug Description
Binary package hint: inkscape
It happened in Lucid as well, now running Maverick, when apps using Java are running, like Netbeans, or Open Office. Now it seems to happen after Inkscape is running for a while (about half an hour). It was reported to be connected to copy/paste tools, but I have none running, to my knowledge.
ProblemType: Crash
DistroRelease: Ubuntu 10.10
Package: inkscape 0.48.0-1ubuntu1
ProcVersionSign
Uname: Linux 2.6.35-22-generic x86_64
NonfreeKernelMo
Architecture: amd64
Date: Thu Sep 23 12:16:05 2010
ExecutablePath: /usr/share/
InstallationMedia: Ubuntu-Studio 10.10 "Maverick Meerkat" - Beta amd64 (20100902.1)
InterpreterPath: /usr/bin/python2.6
ProcCmdline: /usr/bin/python gimp_xcf.py --guides=false --grid=false /tmp/ink_
ProcEnviron:
PATH=(custom, user)
LANG=nl_BE.utf8
SHELL=/bin/bash
PythonArgs: ['gimp_xcf.py', '--guides=false', '--grid=false', '/tmp/ink_
SourcePackage: inkscape
Title: gimp_xcf.py crashed with TypeError in effect()
UserGroups: adm admin audio cdrom dialout lpadmin plugdev sambashare
If not related to actually exporting to XCF, referring to bug #418242 in Inkscape: “Incorrect call to output plugins (with persistent error message) on copy” (triggered when copy&pasting objects or text in Inkscape while a clipboard monitoring application is running in the background):
<https:/
Earlier description in the comments to bug #494472 “Opened and closed outlook. On closing Python window appeared and on retuning to inkscape I received an error message”, a similar error - which has never been confirmed though and reported for the Windows port.
But bug #565296 would not explain «Now it seems to happen after Inkscape is running for a while (about half an hour).», i.e. what triggers the output extension to be called without actually exporting to XCF.
...
It happens completely at random, suddenly out of the blue, but always when
inkscape is not in focus, I mean, when working in a different application,
like netbeans, or while checking email, typing text in open office.
Suddenly the error warning pops up, followed by the export to image menu.
I can't replicate it now, because I've switched back to lucid, and it
hasn't occurred yet because I simply don't have work for inkscape right now.
I'm sorry I can't give more details, I hope it helps a bit though.
grtz,
Bart
Op schreef JazzyNico <email address hidden>:
> ...
> --
> gimp_xcf.py crashed with TypeError in effect()
> https:/
> You received this bug notification because you are a direct subscriber
> of the bug.
Netbeans might have something to do with this. I found that it triggers bug #418242 (traceback below)
Shall we mark as duplicate?
UniConvertor failed:
Cannot list directory /home/alex/
ignoring it in font_path
Cannot list directory /home/alex/
ignoring it in font_path
/usr/lib/
from popen2 import popen2
No plugin-type information in /usr/lib/
No plugin-type information in /usr/lib/
Cannot load plugin module sk1saver
Traceback (most recent call last):
File "/usr/lib/
desc)
File "/usr/lib/
from app.Graphics.image import CMYK_IMAGE
ImportError: cannot import name CMYK_IMAGE
When importing plugin sk1saver
Traceback (most recent call last):
File "/usr/lib/
module = self.load_module()
File "/usr/lib/
desc)
File "/usr/lib/
from app.Graphics.image import CMYK_IMAGE
ImportError: cannot import name CMYK_IMAGE
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/
saver(doc, output_file)
File "/usr/lib/
% {'name'
app.events.
@Alex - yes, either marking as duplicate or at least adding a comment to bug #418242 that netbeans triggers a similar bug - not when copying text in inkscape but when inkscape is not in focus (if you can confirm this).
Hi bartje,
Thanks for reporting this bug. Does this bug seem to just happen at random? Surely, it only happens if have you tried to export your work as an XCF file? Could this be a duplicate of bug #650890?
|
https://bugs.launchpad.net/ubuntu/+source/inkscape/+bug/645909
|
CC-MAIN-2018-43
|
refinedweb
| 636
| 54.42
|
are there any or am i really limited to using cin?
Printable View
are there any or am i really limited to using cin?
or cin.get();or cin.get();Code:
#include <stdlib.h>
...
.........
system("pause");
....
do a search
Instead of finding a function that works with your compiler you should get a compiler that works with all the functions you want to use. You would be supprised how many functions will not work with DevC++. There are many compilers that will work with a nice development environment. Now matter which free compiler you choose cEdit ( ) will work well and I think it is actually nicer than DevC++'s environment. My prefered free compiler is Borland ( ). There is always the more expensive but versitile Visual Studio by Microsoft.
i dont need it for a system pause
i need it to actually wait for a specific keyboard input before proceeding, and without using enter
getch doesn't exist with DevC++, but getche does:
Code:
#include <iostream>
#include <conio.h>
using namespace std;
int main()
{
cout<<"Hit a key: ";
int key = getche();
cout<<"\nYou hit "<< static_cast<char>(key) <<endl;
cin.get();
}
Theres been tons of topics on these boards about incorporating getch() into dev, just run a search theres lots of information.
|
http://cboard.cprogramming.com/cplusplus-programming/38473-getch-equivalent-dev-cplusplus-printable-thread.html
|
CC-MAIN-2014-42
|
refinedweb
| 210
| 63.39
|
Learn to Use ITensor
Priming Indices in ITensor
Thomas E. Baker—August 21, 2015
Each index of an ITensor is an object of type
Index. Each
Index represents some vector space over which the tensor is defined.
For example, a matrix product state (MPS) tensor has one physical
Index of type
Site. If the MPS is the wavefunction of a chain of S=1/2 spins, then a site index of a particular MPS tensor represents the physical "degrees of freedom" of that site. If the meaning of this site index ever changes (we give an example below), then we should replace the original Index with a different one. But what about indices that have the same meaning, but which we do not want the ITensor
* operation to contract? (Recall that taking the
* product of two ITensor contracts all pairs of matching indices.)
For example, consider the @@S_z@@ operator. This operator has two indices:
When acting @@S_z@@ onto a wavefunction, we only want one of the indices (the bottom one in the diagram) to be contracted, with the top one remaining uncontracted. Setting the prime level of the top index to 1 enforces this behavior, assuming the wavefunction has unprimed indices.
We will detail in this article how to use primes on indices and best practices.
When not to use Primes
An example where we might want to introduce a new
Index, rather than priming an existing one, is when coarse graining a lattice (in some real-space RG procedure). If we go from a fine lattice to a coarser one, then we have a diagram that looks like the diagram on the left:
The local physics has changed on the coarser level. A single site now corresponds to two sites of the original lattice and could have a dimension ranging from 1 to the square of the original lattice dimension. If the new coarse-grained lattice happens to have the same local dimension as the old, it might be tempting to "recycle" the index
s1 by using a primed copy of it as the new site index (as in the diagram on the right). But in light of the new physics, we should just introduce a new
Index (left) instead of using a primed version of an existing
Index (right).
Functions that Prime Indices
In the following, we use "ITensor" to refer to either ITensors or IQTensors, since they have the nearly same interface. For an exhaustive list of all class methods and functions for manipulating prime levels, see the documentation for ITensor objects.
Let us look at some of the most common functions used to manipulate ITensor prime levels. These functions all return an copy of the ITensor with modified prime levels. (ITensors are inexpensive to copy, and only copy their data when absolutely needed.)
Some priming functions act on all the indices:
prime(ITensor)- Return a copy of the ITensor with all index prime levels incremented by one
prime(ITensor,int)- Increment the prime level of each index by "int" (can also be negative)
You can also just raise the prime level of indices of a given IndexType
For example,
prime(psi,Site) raises the prime level of all indices of type
Site.
Indices can be given custom IndexTypes such as Link, Site, etc.
prime(ITensor,Index,int)- Changes the prime level of the specific index "Index" on an ITensor by value "int"
prime(ITensor,Type,int)- Increment the prime level of all indices having type "Type" by "int".
To reset all prime levels back to zero, use:
noprime(ITensor)- sets prime level of an ITensor to zero
Sometimes it is convenient to refer to indices by their current prime level. To turn indices of prime level "inta" into indices of prime level "intb", use:
mapprime(ITensor, inta, intb)- return an ITensor with indices having level inta mapped to level intb.
An Exercise in Priming: Unitary Rotations
To illustrate how the priming system may be used, we will show a matrix multiplication
and how this works with ITensor's priming system. First, let's convert the expression into tensors all of rank two:
This form is perfectly acceptable on paper, but we need to convert it to something that ITensor can understand. Handling the indices @@\alpha@@ , @@\beta@@ , @@\gamma@@ , and @@\delta@@ are handled very simply in ITensor. Note that we only include one index in ITensor to account for all four indices that appear above! This is managed by using different priming levels appropriately.
The first task is to initialize the necessary
Index variables needed
auto i = Index("index i",2);//The number '2' for a rank two tensor (could be any size)
The way in which we will allot the index with different primes can be seen by rewriting the above sum with only
i and primes as
The next tasks is to initialize the ITensors themselves (here, @@\mathcal{H}@@ and @@U@@ ).
auto U = ITensor(i,prime(i)), H = ITensor(prime(i),prime(i,2));
In the full example below, we initialize numbers into these tensors making
U a unitary matrix (for practical purposes, the @@\dagger@@ acts to switch the indices and take the complex conjugate,
conj), but we are most concerned with the priming system for now. Once we have set up our tensors, the entire matrix multiplication takes place in one step
auto ans = U*H*prime(swapPrime(conj(U),0,1),2);
The command
swapPrime takes a rank two tensor and interchanges the prime level for both indices. This takes a matrix @@U_{ii'}@@ and returns @@U_{i'i}@@ . This returns a matrix with the first index at prime level 0 and the second index at prime level 3. If we want to change the prime level of the second index, we can use the command
ans.mapprime(3,1);
#include "itensor/itensor.h" using namespace itensor; int main() { const auto PI_D = 3.1415926535897932384; Real theta = PI_D/4; auto i = Index("index i",2); auto U = ITensor(i,prime(i)),//U with i and i' H = ITensor(prime(i),prime(i,2));//H with i' and i'' U.set(i(1),prime(i(1)),cos(theta));//generates a unitary matrix U.set(i(2),prime(i(1)),-sin(theta)); U.set(i(1),prime(i(2)),sin(theta)); U.set(i(2),prime(i(2)),cos(theta)); H.set(prime(i(1)),prime(i(1),2),0.);//The matrix H (Pauli matrix "x") H.set(prime(i(2)),prime(i(1),2),1.); H.set(prime(i(1)),prime(i(2),2),1.); H.set(prime(i(2)),prime(i(2),2),0.); println(U);//series of prints shows evolution of priming println(swapPrime(U,0,1)); println(prime(swapPrime(U,0,1))); println(prime(swapPrime(conj(U),0,1),2)); auto ans = U*H*prime(prime(swapPrime(U,0,1))); println(ans);//shows ans with i and i''' indices ans.mapprime(3,1); println(ans);//shows action of mapprime println(mat.real(i(1),prime(i(1))));//prints out the Pauli "z" matrix println(mat.real(i(2),prime(i(1)))); println(mat.real(i(1),prime(i(2)))); println(mat.real(i(2),prime(i(2)))); return 0; }
Another Exercise in Priming: TRG
For educational purposes, this will not take the most direct path to the solution. We will use all the functions listed above.
We begin by defining four tensors:
auto x = Index("x0",2,Xtype),//declares x0 as type Xtype y = Index("y0",2,Ytype),//declares y0 as type Ytype x1 = prime(x,1), x2 = prime(x,2), y1 = prime(y,1), y2 = prime(y,2); ITensor S1(x1,prime(x0,2),y0); ITensor S2(y1,x0,y0); ITensor S3(x1,x0,prime(y0,2)); ITensor S4(y1,prime(x0,2),prime(y0,2));
This pattern of tensors appears in a TRG calculation and this represents the final operation before updating the tensor for the next renormalization group step. In order to get a better idea of what these tensors look like, we should draw a picture:
Right now, that's not what we want. If we contracted all the tensors (
S1*S2*S3*S4), then we'd get a scalar. We want the diagrams to contract just as they are drawn: horizontal lines with horizontal lines, vertical lines with vertical lines. We also want some extra double primes on some of the level 1 indices (
x1,
y1). The tensors come out of the program this way and are a necessity in the TRG algorithm. We must manipulate the primes to get the correct contraction:
For educational purposes, let's prime one index:
prime(S1,y0);
and then unprime it
prime(S1,y0,-1);
This returns us to the original diagram.
Now we get serious. Let's remove all primes from
S4:
A *= noprime(S4);//or mapprime(S4,2,0)
This is a promising step considering the contractions we eventually want. We then contract with
S3:
A *= S3;
Now we prime twice the
Xtype indices on the next contraction:
A = prime(A,Xtype,2);
Let's prime
S2 twice and contract it with
A
A *= prime(S2,2);
The dotted line contracted on
*. The last step is to contract
S1 and this gives us the correct result.
A *= S1;
A more straightforward code would be:
auto l13 = commonIndex(S1,S3);//Obtains a common index from both S1 and S3 A = S1 * noprime(S4) * prime(S2,2) * prime(S3,l13,2);
|
http://itensor.org/docs.cgi?page=tutorials/primes
|
CC-MAIN-2019-18
|
refinedweb
| 1,570
| 57.81
|
A ContextManager for keeping threaded output associated with a cell, even after moving on.
import sys import threading import time from contextlib import contextmanager
# we need a lock, so that other threads don't snatch control # while we have set a temporary parent stdout_lock = threading.Lock() @contextmanager def set_stdout_parent(parent): """a context manager for setting a particular parent for sys.stdout the parent determines the destination cell of output """ save_parent = sys.stdout.parent_header with stdout_lock: sys.stdout.parent_header = parent try: yield finally: # the flush is important, because that's when the parent_header actually has its effect sys.stdout.flush() sys.stdout.parent_header = save_parent
Just use this tic as a marker, to show that we really are printing to two cells simultaneously
tic = time.time()
class counterThread(threading.Thread): def run(self): # record the parent when the thread starts thread_parent = sys.stdout.parent_header for i in range(3): time.sleep(2) # then ensure that the parent is the same as when the thread started # every time we print with set_stdout_parent(thread_parent): print i, "%.2f" % (time.time() - tic)
for i in range(3): counterThread().start()
0 2.05 0 2.05 0 2.05 1 4.05 1 4.05 1 4.05 2 6.06 2 6.06 2 6.06
for i in range(3): counterThread().start()
0 2.07 0 2.07 0 2.07 1 4.07 1 4.07 1 4.08 2 6.08 2 6.08 2 6.08
|
http://nbviewer.ipython.org/gist/minrk/4563193
|
CC-MAIN-2015-48
|
refinedweb
| 244
| 56.52
|
One reader reported intermittent socket exceptions. I suspect this to be a result of his server insisting on authentication and my simplistic unauthenticated Send blindly continuing with inappropriate responses until the server drops the connection.
Send
To address this situation, I have extended EmailMessage to support ESMTP authentication. The new method signature is EmailMessage.Send(host,port,username,password) and this method should make the class far more useful in corporate environments that typically do not permit unauthenticated relay.
The original socket based Send was naive and bereft of error checking; that said, it worked faultlessly in all the test environments available to me. Nevertheless, the source code including the test harness has been updated, and both of the socket based Send methods now implement a fair bit of error checking, as well as being thoroughly instrumented to write the proceedings of the entire socket conversation to the debug console.
The authenticated Send method even checks whether the basic authentication method is available from the server and throws a very specific exception when the required protocol is not available. This is because the SMTP server provided with WinXP purports to support ESMTP but in fact does not support basic authentication.
If you want maximum throughput, then once you have this bedded down in your environment, I suggest compiling a release build or commenting out the instrumentation.
Automated email falls into two primary categories (apart from spam).
Commercial communications require a higher standard of presentation. There are various ways to accomplish this. The most robust in the face of recipient technical incompetence is HTML formatted messages.
Microsoft supplies SmtpServer and MailMessage classes in the DNF. For plain-text messages, these are satisfactory. If you run a quick search on these two classes, either on the web or in a newsgroup, you will find a great many people who want to know how to compose the payload from a URL (something you can do with CDO).
SmtpServer
MailMessage
Why would you want to do this? Well, an ASP or ASPX page can take parameters and talk to a database, and on the whole represents a very sophisticated mail merge engine; it can even do localisation.
Unfortunately, for commercial communications, SmtpServer and MailMessage are useless. You can assign MailMessage.Body a string containing HTML, but unless this stands alone - no references to graphics or linked stylesheets - the message will be a mess when read offline. Even if it's read with a live net connection, the references had better be absolute or they aren't going to work.
MailMessage.Body
CDO will composite the payload of an email from a URL, embedding stylesheets and graphics as a multipart MIME stream and doing fixups of the URLS to refer to MIME content identifiers.
This, however, involves COM interop and can get mucky. Also, the MimeOLE COM object (also used by Internet Explorer, Outlook Express and Outlook) has a few bugs. In particular, Microsoft forgot about the possibility that a stylesheet might also refer to files, for example, a background watermark image is typically specified in this way.
Moreover, link references to stylesheets embedded as independent content in a multipart mime stream do not work with OWA (Outlook Web Access). The solution here is to compose all the linked stylesheets into a single stylesheet and then embed this in a STYLE block in the HEAD block of the message. This prevents OWA from omitting it, although Hotmail manages to strip it out (Hotmail is evil).
The upshot of all this is that a class is required to subsume the functionality of MailMessage and SmtpServer, but which can composite the payload of the message from a URL, consolidating stylesheets and embedding them directly into the HTML, non-redundantly mime encoding graphical content and performing fixups on the references.
A miracle occurs as detailed in the source code. I won't go into MIME encoding or message formats. This stuff is all in RFC1521 if you care. One gotcha: boundary markers have two leading dashes that boundary declarations don't have. This is often not obvious because common practice puts a lot of dashes at the start of the boundary marker.
There are two ways to palm off the message once it's ready to send. One is to write it as a file into the pickup directory of a MTA (mail transport agent) and the other is to establish a socket to an SMTP relay host on port 25 and have a little chat about headers and payload in SMTP-ese. EmailMessage.Send() implements both methods as overloads. If you supply a string and an integer, it assumes you are passing host, port. If you pass just a string, it assumes you are passing the pickup folder path. UNC is acceptable.
Files into a pickup folder has the benefit of speed; some relay hosts throttle their connections so as not to choke the network (die spammers die we hate you) and you may find it faster to write files. Also, your mail host may be configured not to allow SMTP relay. Files from the pickup folder don't count as relay; they've originated inside the system.
Par contre, you may not have file system access to the mail host's pickup folder, making sockets a better option. If you don't have either, it's time for a chat with your friendly local sysadmin. If your process lives physically on the mail server, you could point out that loopback is actually defined for a whole class A subnet (127.x.x.x) so he can allow relay for say 127.53.103.113 (he'd probably rather do that than grant filesystem write permission for a web app).
I've supplied it as source for an assembly, although I haven't bothered to strong-name it. This is because it's the sort of thing you'll want to add to various projects. If you were to involve my code in an abomination like cut-and-paste inheritance, you might traumatise it, and then I'd be forced to hunt you down.
Here's some code from the test harness.
using System;
using System.Collections.Specialized;
using System.Configuration;
using pdconsec.EmailSupport;
namespace ConsoleApplication1 {
class Class1 {
[STAThread]
static void Main(string[] args) {
NameValueCollection AS =
ConfigurationSettings.AppSettings;
em.RecipientAddress = AS["RecipientAddress"];
em.RecipientName = AS["RecipientDisplayName"];
em.SenderAddress = AS["SenderAddress"];
em.SenderName = AS["SenderDisplayName"];
em.Subject = AS["EmailSubject"];
em.Url = AS["URL"];
em.Send(AS["SmtpServer"],25);
//em.Send(AS["SmtpServer"],25,"ausername","apassword");
//em.Send(@"C:\InetPub\MailRoot\Pickup");
}
}
}
You are expected to rewrite the app.config to point at your local SMTP relay host and supply valid email addresses. XP comes with one. Install it, it's handy for testing stuff like this. You don't have to keep it running all the.
|
http://www.codeproject.com/Articles/9637/SMTP-MailMessage-done-right
|
CC-MAIN-2014-49
|
refinedweb
| 1,138
| 55.84
|
I've got a web app running on Amazon AWS and it's using Amazon SES to send emails. One of the things I really want this system to do is to be able to forward emails to certain addresses from anyone who mails it, for example if some random person emails contact@mysite.com then I want it to forward to my personal email address.
I tried to do this with aliases, but it doesn't work due to restrictions in Amazon SES which disallow me from sending mail from un-verified senders. It works fine if I email contact@mysite.com from an email address that is verified, but not if I email from an unverified address.
So, what I'd like to try to do is rewrite the From and Reply-To headers on any messages that are sent to contact@mysite.com (and also on other addresses, such as support@mysite.com). I'd like to move the original From header to the Reply-To header, and then change the From header to no_reply@mysite.com (or something like this). For example:
From: "Bruce Wayne" <bruce@batman.com> To: "MySite Support" <support@mysite.com>
Would become
From: "Mail Services" <no_reply@mysite.com> Reply-To: "Bruce Wayne" <bruce@batman.com> To: "MySite Support" <support@mysite.com>
I have support@mysite.com defined in the aliases file, and it transforms it into my personal email. I'm not sure how the aliases stuff will be affected by header_checks or whatever is needed to solve this.
There is also a catchall alias which sends other emails into my webapp, so this whole header-rewriting stuff is really something that I only need/want for very specific addresses as described above.
Rewriting a single header using regexp seems simple enough, but the things I'm unsure about here are
- conditionally rewriting based upon the address of the To header and
- rewriting the Reply-To header based upon the value of the From header.
Can anyone give me some pointers on how to deal with these two problems? Or if I'm looking at this problem the wrong way, any direction would be much appreciated.
UPDATE - I somehow missed that there are if/endif statements in header_checks. I'm trying to do something like this, but my header checks don't appear to be doing anything:
if /^To: (support@mysite.com|contact@mysite.com)/ /^From:(.+@.+).*$/ PREPEND Reply-To:$1 /^From:(.+@.+).*$/ REPLACE From: no_reply@mysite.com endif
I've added the following line to main.cf:
header_checks = pcre:/etc/postfix/header_checks
It's unclear whether I'm supposed to postmap this file. I tried both doing it and not doing it and it made no difference.
Is there something obvious that I'm doing wrong here?
|
https://serverfault.com/questions/501722/conditionally-rewriting-from-and-reply-to-headers-in-postfix
|
CC-MAIN-2019-51
|
refinedweb
| 464
| 66.13
|
Life Science
University Preparation
e-Prep Course
Life Science is one of the ten special-designed ePrep courses by NTU to help NSFs, NSMen, and others to better prepare for their university studies, whether in the local universities in Singapore or foreign universities.
This Life Science ePrep course is developed in collaboration with the textbook publishers, Cengage, and is based on the popular textbook “Biology: Concepts and Applications” by Cecil Starr, CA Evers, and Lisa Starr, currently in its 10th edition. A hard copy of the textbook is provided to every student at no additional cost, and it also comes with excellent learning materials on life science provided by the textbook publishers. Due to time constraints, the focus of this Life Science ePrep course is on Genetics for the purpose of certification. However, materials on all the remaining 39 chapters are also provided.
To help the students to have a more complete academic preparation for their university studies, there are also lots of materials on other subjects such as physics, calculus, statistics, and other branches of mathematics, business finance, corporate finance, engineering economy, business ethics, engineering ethics, psychology, Python programming, etc., so that the students not only get to build up a strong foundation on life science, they also get to strengthen their knowledge on many other subjects as well. Samples of the materials on life science and other subjects provided can be found below. Most of these materials can be downloaded for later studies.
There is also a retired NTU professor acting as the tutor. He can be reached via email or WhatsApp messaging, not only during the course but beyond till the students start their university studies and have their own university tutors, and even beyond when the students require more assistance, not just regarding their academic learning, but on careers and other matters as well.
Audio : Intro to Life Science ePrep Course
Life Science ePrep Course – Main Contents
I. Compulsory Chapters
1 DNA Structure and Function
1 The Discovery of DNA’s Function
1.1 Describe the four properties required of any genetic material.
1.2 Summarize the classic experiments of Griffith, Avery, and Hershey and Chase that demonstrated DNA is genetic material.
2 Discovery of DNA’s Structure
2.1 Summarize the events that led to the discovery of DNA’s structure.
2.2 Identify the subunits of DNA and how they combine in a DNA molecule.
2.3 Explain how DNA holds information.
2.4 Describe base pairing.
3 Eukaryotic Chromosomes
3.1 Describe the way DNA is organized in a chromosome.
3.2 Explain how a eukaryotic cell’s chromosomes carry its genetic information.
3.3 Explain the meaning of diploid.
3.4 Distinguish between autosomes and sex chromosomes.
4 How Does a Cell Copy Its DNA?
4.1 State the purpose of DNA replication and describe the process.
4.2 Explain the role of primers in DNA replication.
4.3 Describe nucleic acid hybridization.
4.4 Describe semiconservative hybridization.
4.5 Explain why DNA replication proceeds only in the 5′ to 3′ direction.
5 Mutations and Their Causes
5.1 Using examples, explain how mutations can arise.
5.2 Describe two cellular mechanisms that can prevent mutations from occurring.
6 Cloning Adults Animals
6.1 Using a suitable example, explain the process of cloning an adult animal.
6.2 Explain why clones can be produced from a single body cell of an adult.
6.3 Describe the process of differentiation.
2 From DNA to Protein
1 Introducing Gene Expression
1.1 Compare the composition and structure of DNA and RNA.
1.2 Describe the flow of information during the process of gene expression.
2 Transcription: DNA to RNA
2.1 Describe the process of transcription.
2.2 Compare DNA replication with transcription.
2.3 Explain three types of post-transcriptional modifications.
3 RNA and the Genetic Code
3.1 Describe codons and give some examples.
3.2 Explain the signals that start and stop translation.
3.3 Explain how an mRNA specifies the order of amino acids in a polypeptide.
3.4 Summarize the role of rRNA and tRNA in translation.
4 Translation: RNA to Protein
4.1 Explain the roles of mRNA, tRNA, and rRNA in translation.
4.2 Describe the way a polypeptide lengthens during translation.
5 Consequences of Mutations
5.1 Describe three types of mutations.
5.2 Explain how mutations can affect protein structure.
5.3 Using appropriate examples, explain why some mutations are not harmful.
3 Control of Gene Expression
1 How Cells Control Gene Expression
1.1 Explain what is meant by gene expression control.
1.2 Give two reasons why control of gene expression is necessary.
1.3 Describe transcription factors and give some examples of the different types.
1.4 List the points of control over expression of a gene with a protein product.
2 Orchestrating Early Development
2.1 Describe the relationship between master genes and differentiation during embryonic development.
2.2 Explain the function of homeotic genes and give some examples.
2.3 Using an appropriate example, explain how experiments with homeotic genes offer evidence of shared ancestry among species.
3 Details of Body Form
3.1 Describe a few examples of gene control in eukaryotes.
3.2 Explain why only one X chromosome is active in cells of a female mammal.
4 Gene Expression in Metabolic Control
4.1 Using the lac operon as an example, describe gene control in bacteria.
4.2 Compare gene expression control in single-celled and multicelled organism.
4.3 Explain why most human adults are lactose intolerant.
5 Epigenetics
5.1 Explain why a DNA methylation is passed to a cell’s descendants.
5.2 List some environmental factors that affect DNA methylation patterns.
5.3 Describe epigenetic modification of DNA and give some examples.
4 How Cells Reproduce
1 Multiplication by Division
1.1 Describe the main events in the eukaryotic cell cycle.
1.2 Explain how mitosis maintains the chromosome number.
1.3 Explain the difference between sister chromatids and homologous chromosomes.
1.4 Identify two body processes that require mitosis.
1.5 Describe the purpose of cell cycle checkpoints.
2 A Closer Look at Mitosis
2.1 List the stages of mitosis in order and the main events that occur during each stage.
2.2 Describe the role of microtubules in mitosis.
2.3 Explain how mitosis packages two sets of chromosomes into two new nuclei.
3 Cytoplasmic Division
3.1 Define cytokinesis and explain why it is necessary.
3.2 Describe and compare cytokinesis in an animal cell and a plant cell.
4 Marking Time with Telomeres
4.1 Illustrate the function of telomeres in cell division.
4.2 Explain why normal body cells are not immortal.
4.3 Describe the fail-safe function of a cell division limit.
5 Pathological Mitosis
5.1 Explain why mutations in growth factor genes can give rise to neoplasms.
5.2 Use an example to describe tumor suppressors.
5.3 Describe the three hallmarks of malignant cells.
5 Meiosis and Sexual Reproduction
1 Why Sex?
1.1 Explain why homologous chromosomes may carry different alleles.
1.2 List the differences between sexual reproduction and asexual reproduction.
1.3 Describe how genetic diversity makes a population more resilient to environmental challenges.
2 Meiosis in Sexual Reproduction
2.1 Describe the relationship between germ cells and gametes.
2.2 Explain how meiosis reduces the chromosome number, and why this is a necessary part of sexual reproduction.
3 A Visual Tour of Meiosis
3.1 Explain the steps of meiosis in a diploid (2n) cell.
3.2 Describe the major differences between meiosis I and meiosis II.
4 How Meiosis Introduces Variations in Traits
4.1 Describe crossing over and how it introduces variation in traits among the offspring of sexual reproducers.
4.2 Explain the random nature of chromosome segregation during gamete formation, and its significance in terms of genetic variation.
5 Mitosis and Meiosis—An Ancestral Connection?
5.1 Describe the similarities and differences between mitosis and meiosis II.
5.2 Support the argument that meiosis might have evolved from mutations in the process of mitosis.
II. Optional Chapters (Not Needed for Certification)
- The Science of Biology
- Life’s Chemical Basis
- Molecules of Life
- Cell Structure
- Ground Rules of Metabolism
- Where It Starts—Photosynthesis
- Releasing Chemical Energy
- Patterns in Inherited Traits
- Human Inheritance
- Biotechnology
- Evidence of Evolution
- Processes of Evolution
- Life’s Origin and Early Evolution
- Viruses, Bacteria, and Archaea
- …See the complete list of Topics and Learning Objectives
What You Get in this Life Science ePrep Course
I. Free Textbook
“Biology: The Unity and Diversity of Life” is a very popular textbook on Biology and other Life Sciences, authored by Cecil Starr, R Taggart, CA Evers, and Lisa Starr, 15th Ed.
II. Free Consultation
A dedicated retired NTU professor is acting as the tutor. You can consult him via email or WhatsApp. A retired professor has a lot more time and energy for you than a full-time professor who has to struggle with heavy teaching and administrative workloads on top of the need to source for research grants and the pressure to perform cutting-edge research.
III. Materials Online
1 Notes, video lessons, and PowerPoint files.
2 Answers/solutions to all questions/problems in the textbook.
3 Online exercises.
4 Bonus learning materials in various branches of mathematics, including calculus, algebra, probability and statistics, as well as on other subjects such as business finance, corporate finance, engineering economy, physics, mechanics, ethics, economics, python programming, and psychology.
IV. Digital Certificate
A digital certificate will be issued if you have completed this NTU Life Science ePrep course and passed all the tests at the end of each of the ten compulsory chapters.
Life Science ePrep Course – Sample Materials
1. Illustrative Video
This video illustrates the translation process whereby a polypeptide chain is assembled from amino acids in the order specified by an mRNA. There are many such video lessons that illustrates the various life science principles and processes.
2. Core Concepts:
All organisms alive today are linked by lines of descent from shared ancestors.
Cells of all multicelled eukaryotes reproduce by mitosis and cytoplasmic division.
Together, these processes are the basis of growth, tissue repair, and asexual reproduction.
Because mitosis links one generation of cells to the next, it is a mechanism by which the continuity of life occurs.
All multicelled organisms use the same molecules to drive and regulate their cell cycle.
Note: This is one of the Core Concepts in the chapter on DNA Structure and Function – for more listing of core concepts in the chapter, see here.
3. Key Terms
- Bacteriophage: virus that infects bacteria.
- DNA sequence: order of nucleotides in a strand of DNA.
- Autosome: a chromosome that is the same in males and females.
- Centromere: of a duplicated eukaryotic chromosome, constricted region where sister chromatids attach to each other.
- Chromosome: a structure that consists of DNA and associated proteins; carries part or all of a cell’s genetic information.
- Chromosome number: the total number of chromosomes in a cell of a given species.
- Diploid: having two of each type of chromosome characteristic of the species (2n).
- Histone: type of protein that structurally organizes eukaryotic chromosomes.
- Karyotype: image of an individual’s set of chromosomes arranged by size, length, shape, and centromere location.
- Nucleosome: a length of DNA wound twice around a spool of histone proteins.
- Sex chromosome: member of a pair of chromosomes that differs between males and females.
Note: These are some of the Key Terms in the chapter on DNA Structure and Function – for more listing of key terms in the chapter, see here
4. Critical Thinking Question and Answer
Question:
Why are some genes expressed and some not?
Answer:
Some genes are expressed and some are not expressed, because not all gene protein products are needed by all cell types. In other words, a cell regulates its gene expression because it contains too many genes. For example, a brain cell and a liver cell contain the same exact DNA code (as do all the cells found within a multicellular organism); however, a brain cell needs different proteins to function properly than do liver cells. Therefore, to make their unique set of protein products, certain genes are expressed and certain genes are repressed. So, one compromise of being multicellular is that every cell contains every gene the organism needs to live. However, the majority of these genes are not needed by any specific cell at a specific time.
Note: In addition to standard types of questions, there are also many Critical Thinking Questions that encourage greater thinking and deeper understanding.
5. Data Analysis
Answer:
- Skin cancer cells had the greatest response to an increase in concentration of the engineered RIPs.
- One hundred percent of all the cell types were alive at 10−7 g/l of engineered RIP.
- Breast cancer cells showed the greatest survival at 10−6 g/l of engineered RIP.
- The engineered RIPs had the least effect on breast cancer cells.
Note: The students also learn to perform Data Analysis which is one of the modern approaches to learning Life Science.
Sample Bonus Materials – Beyond Life Science
1. Video Lecture on Statistics
This short video clip is about the computation of the correlation coefficient, r, between two variables. This is one of the many video lessons on statistics provided at the course site so that a student who signs up for this Life Science ePrep course can also learn statistics.
2. Cross Word Puzzle Solution on Biotechnology
3. Question and Answer on Engineering Economy (Cost Estimation). On the basis of equal cost per unit production, what monthly salary is justified for B if the foreman gets $3,800 per month and A gets $3,000 per month?
Answer:.
Note: This is one of the many questions and answers on engineering economy provided at the course site so that a student who signs up for this Life Science ePrep course can also learn engineering economy.
4. Video Lessons on Business Finance (Efficient Markets)
This short video lesson discusses the efficient financial markets and the two conditions that must be satisfied for a market to be considered efficient – (1) market at equilibrium with equal number of buyers and sellers and (2) asset traded at intrinsic value as all players have accessed to all the same available information.
This is one of the many video lessons on business finance provided at the course site so that a student who signs up for this Life Science ePrep course can also learn business finance.
5. Worked Example on Engineering Economy (Cost Concepts and Design Economics)
Question:
A company produces and sells a consumer product and is able to control the demand for the product by varying the selling price. The approximate relationship between price and demand is
p=$38 + 2,700/D – 5,000/D2, for D>1,
where p is the price per unit in dollars and D is the demand per month. The company is seeking to maximize its profit. The fixed cost is $1,000 per month and the variable cost (cv) is $40 per unit.
a. What is the number of units that should be produced and sold each month to maximize profit?
b. Show that your answer to Part (a) maximizes profit.
Answer:
Note:This is one of the many worked examples on engineering economy provided at the course site so that a student who signs up for this Life Science ePrep course can also learn engineering economy.
6. Objective Question Exercise on Physics (Energy of a System)
Question:
A certain spring that obeys Hooke’s law is stretched by an external agent. The work done in stretching the spring by 10 cm is 4 J. How much additional work is required to stretch the spring an additional 10 cm?
1. None
2. 2 J
3. 4 J
4. 8 J
5. 12 J
6. 16 J
Answer (5).
4.00 J = ½ k(0.100 m)2.
Therefore k = 800 N/m and
to stretch the spring to 0.200 m requires extra work: ΔW = ½ (800)(0.200)2 − 4.00 J = 12.0 J.
Note: This is one of the many objective questions exercises on physics provided at the course site so that a student who signs up for this Life Science ePrep course can also learn physics.
7. Web Exercises on Psychology (Sensation and Perception)
Question:
After living on a busy street for months, Jay-Z didn’t notice the noise, but then one day the street was closed to traffic and the unusual silence made Jay-Z look out the window. This is an example of
(a) unlearning
(b) dishabituation
(c) motivation
Answer: (b)
Note: This is one of the many web exercises on psychology provided at the course site so that a student who signs up for this Life Science ePrep course can also learn psychology.
8. Video Lecture on Maths for Mgr, Life and Soc Sc (Exponential Functions)
This video lesson discusses the solving of an exponential decay problem, using a half-life problem for illustration.
This is one of the many video lessons on mathematics provided at the course site so that a student who signs up for this Life Science ePrep course can also learn mathematics.
9. Video Lesson on Mechanics (Work and Energy)
This short video lesson discusses power as the rate at which a work is performed, or the rate at which an energy is transmitted.
This is one of the many video lessons on mechanics provided at the course site so that a student who signs up for this Life Science ePrep course can also learn mechanics.
10. Python Programming (Bubble Sort)
Code:
def bubble_sort(array):
n = len(array)
print(“original array”, array)
for i in range(n):
already_sorted = True
for j in range(n – i – 1):
if array[j] > array[j + 1]:
array[j], array[j + 1] = array[j + 1], array[j] #swap
already_sorted = False #there was a swap
print(“array after i=”, i, “and j=”, j, array)
if already_sorted: #if no more swap
break
return array
oArray = [2, 8, 9, 4, 26, 82, 56, 43]
nArray = bubble_sort(oArray)
print(“Sorted array”, nArray)
Output:
original array [2, 8, 9, 4, 26, 82, 56, 43]
array after i= 0 and j= 2 [2, 8, 4, 9, 26, 82, 56, 43]
array after i= 0 and j= 5 [2, 8, 4, 9, 26, 56, 82, 43]
array after i= 0 and j= 6 [2, 8, 4, 9, 26, 56, 43, 82]
array after i= 1 and j= 1 [2, 4, 8, 9, 26, 56, 43, 82]
array after i= 1 and j= 5 [2, 4, 8, 9, 26, 43, 56, 82]
Sorted array [2, 4, 8, 9, 26, 43, 56, 82]
Note:
The 1st interchange is between array[2] and array[3] as 9 is greater than 4
The 2nd interchange is between array[5] and array[6] as 82 is greater than 56
The 3rd interchange is between array[6] and array[7] as 82 is greater than 43
Notice that after the first round (i=0), the last element is the greatest number
The 4th interchange is between array[1] and array[2] as 8 is greater than 4
The 5th interchange is between array[5] and array[6] as 56 is greater than 43
Notice than after the second round (i=1), the last but one number is the second largest number
During the third round, there is no swapping at all, implying the array is already sorted and the sorting ends.
11. Economics (The Consumer’s Optimal Choices)
1. The consumer would like to end up on the highest possible indifference curve, but he must also stay within his budget.
2. The highest indifference curve the consumer can reach is the one that just barely touches the budget constraint. The point where they touch is called the optimum.
3. The optimum point represents the best combination of cola and pizza available to the consumer.
- The consumer would prefer point A, but he cannot afford that bundle because it lies outside of his budget constraint.
- The consumer could afford bundle B, but it lies on a lower indifference curve and therefore provides less satisfaction.
4. At the optimum, the slope of the budget constraint is equal to the slope of the indifference curve
- The indifference curve is tangent to the budget constraint at this point.
- At this point, the marginal rate of substitution is equal to the relative price of the two goods.
- The relative price is the rate at which the market is willing to trade one good for the other, while the marginal rate of substitution is the rate at which the consumer is willing to trade one good for the other
12. Discrete Mathematics (Equality of Mathematical Expressions)
Question:
Solution:
Remark:
Above are some samples of the bonus materials on other subjects. They demonstrate how comprehensive and broad-base this Life Science e_Prep course is for helping students to be better prepared for their university studies or their careers. It is to be expected that not all the bonus course materials are of use to the students who take up this academic university e_prep course on Life Science, but they can choose which of these bonus course materials are relevant to them and ignore the rest.
Remember not to short-change yourself – do not go for any of those low-grade courses prepared by any “Tom-Dick-And-Harry” who self-claim to be an industry expert, especially if you are preparing for further academic studies or career advancement! You do not need thousands of such courses. As you can see, with this single Life Science course, you have a good suite of materials on many other subjects as well. You also get a hard copy of the Biology – Concept and Applications textbook.
It is prudent to go only for a high-quality specially-designed academic course such as this Life Science electronic prep course for getting you a head start in university, or your career.
|
https://eprepcourses-sg.online/life-science-eprep/
|
CC-MAIN-2021-39
|
refinedweb
| 3,687
| 54.63
|
Naval Robocode
Intro
These are some of the things that make Naval Robocode different from Robocode.
The Rules are all subject to change, so be sure to tell me about any changes you’d like to see.
For the discussion on Google Groups:
If you wanna look into the (still slightly messy) code, go to:
And if you wanna dive straight into Naval Robocode, go to:
The above setup hasn't been forked with the official Robocode repository, yet.
Also, I didn't get the in-game editor to work for myself, so I recommend downloading the setup and then following this guide: Robocode/Eclipse/Create_a_Project
Don't forget to replace robocode.Robocode to robocode.NavalRobocode in Robocode/Running_from_Eclipse ~
Creating Your First Ship
Yarrr, matey! Welcome to creating your first fleet.
If ye be makin’ those automated tanks for the past years, ye might want to check out the Modified Rules.
Remember that ye can always steal one of the sample ships and use them as yer own.
I assume ye have yer men ready at the bay to build yer ship during this intro.
(I assume you got Naval Robocode all set up in Eclipse. If not, check out this guide: )
My First Ship
Away with the pirate act! Your Ship is much more like a marine ship than a pirate ship.
(Though, if anybody wants to re-skin the current ships to look like a pirate ship, be my guest!)
As stated before, I assume you’re creating your Ship in Eclipse.
(I couldn’t get the in-game editor to work, myself) On top of this, I assume you are at least a beginner at Java.
I will create a full guide for total beginners at programming later, but for now this guide will have to do.
package thoma; import robocode.*; public class MyFirstShip extends Ship{ public void run(){ setAhead(200); setTurnRightDegrees(90); } public void onScannedShip(ScannedShipEvent e){ fireFrontCannon(1); } }
Take a look at this example. This is your first Ship! When your Ship is first created, the run function will be called.(public void run())
Anything in between the squiggly brackets {} is what your Ship will do while it’s still alive and kickin’.
Within this run-function, two methods are called. setAhead() and setTurnRightDegrees().
setAhead(200) tells your ship to move ahead 200 pixels.
setTurnRightDegrees(90) tells your ship it should turn until it’s 90 degrees to the right of your Ship’s original direction.
Remember! Ships can’t turn if there’s no velocity, much like a real ship.
FIRE!
A Ship has two cannons, called FrontCannon and BackCannon.
Firing them is done by fireFrontCannon(your_gun_power) and fireBackCannon(your_gun_power) respectively.
What’s your_gun_power, you ask? Cannons can fire a bullet that’s stronger or weaker depending on the gun power. This power can be between 0.1 and 3.
Placing this method inside the onScannedShip(ScannedShipEvent e) method makes it so that the gun will only fire when the radar has seen another Ship.
Movement
Acceleration is still 1 pixel/turn^2
Deceleration is 0.8 pixels/turn^2 now (Gives a nice floaty effect)
At the moment the turn rate is 0.8 degrees/turn. Do you find your Ship is moving around in circles which are way too big?
Try decreasing your max velocity. The less you move per turn, the sharper your turns are.
Ships only start turning if there’s some velocity, unlike a Robot.
Bullets
Bullet damage has been decreased to
damage = 4 * bulletPower; if(bulletPower > 1){damage += 1.2 * (bulletPower - 1);}
Though. This might be a bit too low.-Thomas
Energy restored from the bullets has been reduced to
bulletPower * 2
I tried to make the bullets a bit weaker, since Ships are easier to hit.
This, on top of the fact Ships already got 2 cannons, made me think Ships died too quickly.
Bullets can be fired from two cannons. (FrontCannon and BackCannon)
At the moment you can fire them with the fireFrontCannon(double) and fireBackCannon(double) commands.
Mines
Mine damage is calculated as followed
damage = 3 * minePower;
if(minePower == 15){damage+=5;}
Energy restored from a mine is
MinePower * 3
Mines can be placed with a power between 5 and 15.
Ship
Length of a ship is 207. Width is 40.
The pivot of a Ship is NOT in the middle.
If you look at the picture on the right, the red dot will indicate the pivot of the Ship, which should be 50 pixels from the center of the Ship.
Furthermore, a Ship has two cannons, FrontCannon and BackCannon. (See the blue weapons on the Ship)
A Ship has a Radar which is exactly the same as a Robocode Radar.
And a Ship has a MineComponent, which is the darkgreenish thing at the back of the Ship. This is used to drop mines.
Each of these components can be colored.
The coordinates of each of these components can be retrieved with getXFrontCannon()/getYRadar() etc.
Ships work like AdvancedRobots for as far as I know. setTurnLeftDegrees() doesn’t have a turnLeftDegrees() equivalent. This means you’ll have to work with the execute()-function most of the time.
Miscaleneous
Ships don’t take hit damage from walls.
If anybody is interested, I accidentally implemented something that creates the possibility to take more or less damage depending on where your Ship has been hit.
(If they run head-first into eachother, they could take massive damage, where hitting from the side would result in less damage)
Right now the ramming damage is the same as in Robocode.
Things to look out for
At the moment there are still a few things I’m not too sure about. Here’s a list about them~
1. Acceleration and Deceleration
2. Size of BlindSpot. (Options->Preferences->View Options->Visible Naval Blind Spot)
3. Function names (Too large? Too unclear? Not enough documentation?)
4. Bugs
Known "Bugs"
1. In game editor doesn’t work (for me) since it can’t find a compiler.
2.
The game is still mixed with the original Robocode, thus there can still be played with Tanks. I made a slightly “dirty” fix to make sure Ships and Tanks can’t play together on the same map. This fix is basically “If there’s Ships on the map, IS_NAVAL = true, else IS_NAVAL = false”. If you record a game in Naval Robocode, boot up a fresh game and try to watch the recording, you’ll get a cool bug. Since the game hasn’t tested yet whether there are Ships on the Map, IS_NAVAL = false. So when you watch the recording, it’ll interpret the Ships in the recording as quirky moving Robots. (Watching a recording while IS_NAVAL=true works as intended) Fixed with TO-DO #1 #2
3. Making your weapons move independently from your ship, makes them move a bit quirky, because of their blind spots. Independent radars, however, move just fine.
We held a competition within the company and so far we haven’t found any bugs.(They couldn’t even find the bugs I already knew of)
They did, however, have some comments regarding the game itself.
1.
IShip::getBodyHeadingRadians does nor return values between 0..2*Pi;nor is specified in which range the values are to be expected. Specify, document and enforce that each return angle (radians) is between –pi .. pi or 0 and 2*pi (similar for the methods using degrees). Done.
2.
IShip::setCourse has no radians variant Done.
3. Create an interface for controlling the Front- and BackCannon. Calling something like getFrontCannon().setTurnRightDegrees(), might be nicer than setTurnFrontCannonRightDegrees(double).
4.
Implement some kind of blast-radius for the mines. (The winner of the competition basically just rammed people backwards to throw mines on their ships.) Created a new bug. Ships killing themselves with mines get bonus points :D
Thoma’s TO-DO List
1. Finding a way to separate the database into two. I mean, I’d love to get rid of the regular Robots in the list of bots to choose from. Though, this doesn’t isn’t top priority. Didn't split it up, just filtered the original database.
2. Creating a custom run configuration for Naval Robocode would be really nice. If you look back at “Known Bugs #3”, creating a new way to run Naval Robocode that would automatically set IS_NAVAL = true, would simultaneously get rid of bug #3. Moved the variable which states whether we're playing Naval Robocode or not to HiddenAccess.
3. Creating some clearer documentation.
4. Refactoring. Ooooh. I REALLY want to refactor this program. Background story: The assignment to create Naval Robocode has been given to an intern last year, as well. Feeling “lucky”, I worked from where he left of, since he didn’t get to finish it. I used to look at his code as if it couldn’t contain any mistakes, but as of right now, I want to get rid of most of the stuff he did. Which include:
• Trying to reverse all Y-values. If we take an 800x600 screen, according to him the top of the screen would be y=0, while the bottom of the screen would be y=-600. I understand that some game do this, but I never understood why he wanted to implement this so badly. Luckily, I got rid of most of this, though there might still be something left behind.
• The IProjectile-interface. It’s simply not needed. Worst part is that I used this interface for the mines. Done.
• The ShipStatus class. I made this one myself, but I don’t really like the fact I’m giving ShipProxy access to all every component on the Ship.
|
http://robowiki.net/w/index.php?title=Naval_Robocode&printable=yes
|
CC-MAIN-2017-26
|
refinedweb
| 1,620
| 75.1
|
@ZeroBit: Fixed, thanks :)
Search Criteria
Package Details: blueproximity 1.2.5-7
Dependencies (6)
- bluez-utils (bluez-git)
- librsvg (librsvg-git)
- pygtk
- python2 (placeholder, pypy19, python26, stackless-python2)
- python2-configobj>=4.7.0-1
- python2-pybluez>=0.14-2
Required by (0)
Sources (4)
- blueproximity-1.2.5.orig.patch
- blueproximity.desktop
- blueproximity.xpm
-
Latest Comments
erikw commented on 2015-05-13 16:08
ZeroBit commented on 2015-05-09 20:25
Download link doesn't work
Please, change it
erikw commented on 2015-01-10 14:48
@ancarius: Thanks, I've now refactored the PKGBUILD to use the newer package() function.
NullDivision commented on 2014-12-29 14:43
It would appear that there's no package() function defined in the latest PKGBUILD
[*******@****-******* blueproximity]$ makepkg
==> ERROR: Missing package() function in /home/*******/Downloads/blueproximity/PKGBUILD
erikw commented on 2014-04-18 15:49
Package updated to not require all imports to success, one bluetooth and one bluez is enough.
erikw commented on 2014-03-10 21:19
Changing proximity.py:103 from
import _bluetooth as bluez
to
import bluetooth as bluez
makes the program start-able. I'll take a closer look asap. Sorry for late replay; I'm busy doing my master's thesis atm.
bf_x commented on 2014-03-04 10:08
Same problem, package does not work as of now. @Erikw can you please have a look at this?
liujed commented on 2014-02-19 03:49
Updating to python2-pybluez-0.20-1 breaks this package. It appears that the import of _bluetooth at /usr/bin/proximity.py:103 is failing.
erikw commented on 2013-03-16 20:58
@techhat Dependencies are updates in the PKGBUILD!
techhat commented on 2012-11-11 00:32
Had to change python-configobj and python-pybluez to python2-configobj and python2-pybluez to get this to build.
FredericChopin commented on 2011-01-31 15:21
Sorry, missread your comment. Added librsvg to dependencies.
FredericChopin commented on 2011-01-23 09:57
I have no problem running blueproximity without libsvg. Could you tell me, where the dependency is?
daimonion commented on 2011-01-22 21:21
Please put librsvg as dependancy.
Anonymous comment on 2010-10-26 13:01
this package needs to be updated, as python3 is now the default version used.
an additional dependency is python2, the shebang /usr/bin/proximity.sh must be changed accordingly (#!/usr/bin/env python2) and in /usr/bin/start_proximity.sh "python" must be replaced by "python2" in line 4.
|
https://aur.archlinux.org/packages/blueproximity/?ID=14592&comments=all
|
CC-MAIN-2017-04
|
refinedweb
| 416
| 51.04
|
At times we need to show web content within our React Native application. In this post, I will cover how to integrate iFrame with a React Native project. Let’s get started
What is an iFrame?
I guess, most of us, the Web and React developers have heard about an iFrame. If you have not, then let me define it for you in layman’s term.
An iFrame lets you show or embed another web page within the current HTML page. And that web page can be a 3rd party web page from a different domain. For eg., you can load a inside your web page by using an iFrame.
As an example, I am loading an Open Street Map on my current page using an iFrame.
<iframe id="inlineFrameExample" title="Inline Frame Example" width="500" height="400" src=""> </iframe>
Getting Started
You can use something called a WebView to show web content inside a Native View. React Native provides a web view component. A WebView is kind of a stripped-down version of a browser (eg. it does not have the menu bar, a status bar that a normal browser has)
React Native core has a WebView component, but they are planning to remove it from the core React Native package. Instead, they recommend using the react-native-webview module/component from react-native-community
Installing react-native-webview
npm install --save react-native-webview
Once you install the component in your project, you can use it as shown below,
import React, { Component } from 'react'; import { WebView } from 'react-native-webview'; class MyInlineWeb extends Component { render() { return ( <WebView originWhitelist={['*']} source={{ html: '<h1>Hello world</h1>' }} /> ); } }
** For showing static HTML or inline HTML, you need to set originWhiteList to *.
How to integrate an iFrame in your React Native app?
Similar to the example above you can load an iFrame inside of a WebView. Let’s see an example below
import { WebView } from 'react-native-webview'; class MyInlineWeb extends Component { render() { return ( <WebView originWhitelist={['*']} source={{ html: "<iFrame src='your_URL' />" }} /> ); } }
So as shown in the example above, your iFrame is loaded inside the WebView component. You can then render any web page or URL inside your iFrame then.
You just saved my life
|
https://josephkhan.me/iframe-with-react-native/
|
CC-MAIN-2020-40
|
refinedweb
| 371
| 61.36
|
void LetsPlayAGame()
{
It’s time to tackle a more complex assignment. We will start with a simple card game, “Go Fish.”
If you’ve never played, take a look at the directions at. We will be using a simple version of these rules to make our game. And if you would like to download or view the completed code project, go to.
}
void ModelTheWorld()
{
Object-Oriented Programming is excellent for creating models of the real world, which, after all, is made up of objects. Let’s start by creating a model of a deck of cards. Actually, though, a deck of cards is just a collection of card objects, and we already know how to create collections! So, let’s start by creating a
card object. What defines a card?
- Suit – The color and symbol of the card. Hmm…maybe we need to represent suits as an object too. (In some other card games, the red suits are interchangeable, as are the black suits, so color might be important as well.)
- Rank – The number or name of the card.
One simple way to create a list of value options is to create an
enum. We will create two enums, one for
Suit:
namespace GoFish { public enum Suit { Hearts, Diamonds, Spades, Clubs } }
And one for
Rank:
namespace GoFish { public enum Rank { Ace, Two, Three, Four, Five, Six, Seven, Eight, Nine, Ten, Jack, Queen, King } }
In VS Code, go to
File => New File, and add the code above for
Suit. Save as
Suit.cs, and then repeat and create
Rank.cs. You should see these files added to your sidebar explorer, as well as the tabs at the top.
Now let’s create our
Card file, saving it as
Card.cs. Add the code below. Notice that this is now a class, out of which we can create all our cards. I’ve also created a property called
PluralName so that I can return a string representation of the rank in its plural form.
namespace GoFish { public class Card { public Suit Suit; public Rank Rank; public Card (Suit suit, Rank rank) { Suit = suit; Rank = rank; } public string PluralName { get { if (Rank == Rank.Six) { return Rank + "es"; } return Rank + "s"; } } } }
Once we have the
Card class, it is easy to build a collection of cards into a
Deck. But instead of creating a collection by hand, I am using
inheritance to create a special
SubClass of
List. Notice in the class declaration line,
: List<Card>. This means that a
Deck will inherit all the properties and methods of a
List, such as
Add and
Remove. But in addition, we can add our own methods, like
Shuffle and
Draw.
using System; using System.Collections.Generic; using System.Linq; namespace GoFish { public class Deck : List { readonly Suit[] suits = { Suit.Hearts, Suit.Clubs, Suit.Diamonds, Suit.Spades }; public Deck() { for (var i = 0; i < 13; i++) { foreach (var suit in suits) { Add(new Card(suit, (Rank)i)); } } } public void Shuffle() { var rnd = new Random(); for (var i = 0; i < Count - 1; i++) { Swap(i, rnd.Next(i, Count)); } } public void Deal(List players, int numberOfCards) { foreach (var player in players) { player.Hand = new List(); Draw(player, numberOfCards); } } public void Draw(Player player, int numberOfCards = 1) { for (var cardNum = 0; cardNum < numberOfCards; cardNum++) { player.Hand.Add(this.First()); RemoveAt(0); } } public Card Peek() { return this.First(); } void Swap(int a, int b) { var temp = this[a]; this[a] = this[b]; this[b] = temp; } } }
}
void ReadyPlayerOne()
{
We will create one more data model class, to signify a
Player. Very simply, each player needs to have a
Hand and
Sets, or laid down 4-of-a-kind matches. Rather than create subclasses like we did for Deck, Hand uses the default
List<Card> definition.
Sets, however, is a List of a List, which allows you to track multiple groups of cards.
using System.Collections.Generic; namespace GoFish { public class Player { public List Hand { get; set; } public List<list> Sets { get; } = new List<list>(); } }</list</list
In our next post, we will explore how to lay out the action of the game!
}
|
https://tocode.software/2018/08/26/lesson5_gofish_pt1/
|
CC-MAIN-2021-31
|
refinedweb
| 682
| 72.76
|
Opened 4 years ago
Closed 4 years ago
Last modified 3 years ago
#21351 closed Cleanup/optimization (fixed)
memoize function needs tests and improvements
Description
django.utils.functional.memoize(func, cache, num_args):
- lacks tests
- should raise exceptions when provided with illegal arguments (
funcmust be a function,
cachea dict, and
num_argsa positive integer)
- should raise an exception if any of the arguments of
func, which are used as dictionary keys, is unhashable
Attachments (2)
Change History (21)
comment:1 Changed 4 years ago by
comment:2 Changed 4 years ago by
I think you're right, it looks over-zealous. Do you think the third point's unnecessary as well?
comment:3 Changed 4 years ago by
There's actually a worse problem with
memoize, that Shai Berger has pointed out: it chokes on keyword arguments...
His example:
>>> from django.utils import functional as F >>> def f(a,b): return a+b ... >>> cache = {} >>> f = F.memoize(f, cache, 2) >>> f(3,4) 7 >>> f(3,b=4) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: wrapper() got an unexpected keyword argument 'b' >>>
comment:4 Changed 4 years ago by
Accepting this on the basis of missing tests alone (I changed
memoize to be a no-op and the full test suite runs without problems).
As for the other issues described, maybe we could build
memoize on top of
functools.lru_cache [1].
It's only available on python 3, but there is a backport available [2].
[1]
[2]
comment:5 Changed 4 years ago by
comment:6 Changed 4 years ago by
I don't know all the places that memoize is used, but I suspect making it more robust will also make it slower, and for a function whose entire reason for existence is performance, that would be a problem.
Currently,
memoize() is undocumented, and I think we should consider just making it explicitly private. Users don't really need it in this form (and when they do, they should probably base it on the cache framework).
comment:7 Changed 4 years ago by
Replacing
memoize by
lru_cache will probably take some work to get right, as
memoize allows one to provide a dict for caching.
lru_cache hides the cache from the user, only allowing the instrumented
cache_clear() for clearing the cache altogether.
comment:8 Changed 4 years ago by
As for the usage of
memoize:
$ git grep -c "memoize" django/contrib/staticfiles/finders.py:2 django/core/management/commands/loaddata.py:2 django/core/urlresolvers.py:4 django/utils/functional.py:2 django/utils/translation/trans_real.py:2 tests/decorators/tests.py:2
comment:9 Changed 4 years ago by
However, I have some questions about the deprecation and backport:
- Can the backport be included, as it is MIT licensed?
memoizeis an internal API, should it follow the deprecation policy?
- Is this the correct way to include a backport?
Update: using GitHub search [2], there's quite a few
from django.utils.functional import memoize. So we might need to properly deprecate the decorator.
[1]:
[2]:
comment:10 Changed 4 years ago by
As discussed on IRC, a few benchmarks comparing
memoize with
lru_cache and
lru_cache's backport. Also, another column with the statistics bookkeeping in the backport disabled for extra speed. These results are achieved with
lru_cache(maxsize=None), as that doesn't involve locking and is comparable to
memoize.
+-------+---------+-----------------+----------+----------+ | | memoize | lru_cache (py3) | backport | no-stats | +-------+---------+-----------------+----------+----------+ | write | 0.970 | 2.373 | 2.714 | 2.413 | | read | 0.342 | 0.830 | 1.053 | 0.950 | +-------+---------+-----------------+----------+----------+
comment:11 Changed 4 years ago by
The pull request looks good, I left a comment, you can mark the ticket as RFC once you've addressed it.
comment:12 Changed 4 years ago by
For ticket completeness bmispelon also left a comment about
memoize(..., num_args=1) whilst
get_callable takes 2 arguments:
bmispelon 16 hours ago:
The num_args argument was introduced in b6812f2d9d1bea3bdc9c5f9a747c5c71c764fa0f specifically for get_callable (note how it can take two arguments but the num_args is 1).
I suspect there might be something weird going on but I haven't had time to look into it and unfortunately, we cannot ask the original committer of that change anymore :(
Say a function has two parameters and one result, then
A,B -> X and
A,C -> Y. However as the function's cache strategy only accounts for the first argument, it will always do the same, namely
A,? -> X UNLESS in the case of an exception. Now when the exception happened for
A,D -> ?, then no cache is written and
A,C -> Y. However if
A,C -> Y is executed FIRST and cached, then
A,? -> Y and thus
A,D -> Y -- no exception being raised.
I have added a test to the PR demonstrate this problem.
comment:13 Changed 4 years ago by
There's still some issues with the pull request:
- shai raised concerns about performance. Your benchmark has no units but I assume it's a "smaller is better" type which means that the
lru_cacheis less performant than the old
memoize. Can you address that (and maybe also show the code of your benchmark)?
- The current pull request is not python3 compatible (because of the use of
xrangein one of the tests)
- I don't think the test you added to show the issue of
num_argsbelongs in this pull request. While I agree that there's probably a bug there, it should either be opened as a separate ticket or just documented since we're deprecating
memoizealtogether.
Thanks
Changed 4 years ago by
memoize num_args implication for get_callable
Changed 4 years ago by
comment:14 Changed 4 years ago by
I have attached a rewritten speedtest, which should show the performance impact on 1M reads and writes.
lru_cache is around 3 times slower on both, which should come as no surprise.
memoize has a very simplistic cache key generator as it copies positional arguments into a new list, while
lru_cache creates a hash of both positional and keyword arguments. This overhead comes at a reduced performance. Nevertheless looking at the performance difference of a single read (0.0000006s) and write (.0000014s), the impact seems rather limited.
Python: 2.7.5 (default, Sep 5 2013, 23:32:22) [GCC 4.2.1 Compatible Apple LLVM 4.2 (clang-425.0.28)] +-----------+----------+-----------+ | | memoize | lru_cache | +-----------+----------+-----------+ | 1M reads | 0.34065s | 1.07396s | | 1M writes | 1.01192s | 2.71383s | +-----------+----------+-----------+ Python: 3.3.2 (default, Nov 6 2013, 08:59:42) [GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.2.79)] +-----------+----------+-----------+ | | memoize | lru_cache | +-----------+----------+-----------+ | 1M reads | 0.30876s | 0.87845s | | 1M writes | 0.99767s | 2.39997s | +-----------+----------+-----------+
The PR has been updated with the redundant tests removed and PY3 compatibility.
comment:15 Changed 4 years ago by
This results in around 6-8% slowdown in url_reverse djangobench benchmark. While not critical at all, avoiding that would be nice. URL reversing is already bottleneck when producing a large json list where each element contains a reversed url.
Similar slowdown is in url_resolve benchmark, too.
comment:16 Changed 4 years ago by
I am OK for making this change, using standard library is usually a good idea. The lru_cache definitely handles correctly some cases Django doesn't. If the slowdown is too big for any specific use case, maybe it would be better to use a custom tailored cache instead there.
What's the rationale for raising exceptions with illegal arguments? Maybe it's the right thing to do, but I'm skeptical. Here's my research on the topic.
|
https://code.djangoproject.com/ticket/21351?cversion=1&cnum_hist=7
|
CC-MAIN-2017-47
|
refinedweb
| 1,253
| 63.7
|
Hi everyone,
This is just heads-up about issues I discovered when trying to get Xalan 1.11 building on
a Solaris 10 system with SunStudio 12.3 using the -library=stdcxx4 option and which changes
I needed to make to get things compiling... Maybe I should post this in the Xalan wiki but
I need to do some other things now...
Download xalan 1.11 (src.tar.gz) from the official site (xalan.apache.org/xalan-c/<>),
verify it, and copy it to the directory where you want to build it on the Solaris.
Ensure that GNU tar is used.
Ensure that GNU make is used.
[The following export probably isn't necessary to build Xalan, but since it was set when Xerces
was built, it seems handy to also define it when Xalan is built (to see if defining it causes
issues during the Xalan build process) ...]
export CXXFLAGS="-library=stdcxx4 -m32"
tar xvf xalan_c-1.11-src.tar.gz >out.extractFromArchive.txt 2>&1 &
cd *11/c
export XERCESCROOT=<root-of-directory-hierarchy Xalan was installed to>
export XALANCROOT=`pwd`
./runConfigure -p solaris -c cc -x CC -l -library=stdcxx4 -b 32 -P /home/prod/usd72/app/xalan/1.11.stdcxx4
>out.runconfig 2>&1 &
Change src/xalanc/Include/SolarisDefinitions.hpp as follows:
remove the #if/#else/#endif around the XALAN_HAS_STD_ITERATORS/XALAN_HAS_STD_DISTANCE lines.
make >out.make 2>&1 &
Verify that the output-containing files don't contain unexpected error messages and such and
that the expected library files are present in the supplied Xalan installation/lib dir.
To make sure that xalan and xerces seem to be working correctly, do the following :
cd samples
./runConfigure -p solaris -c cc -x CC -b 32 -l -library=stdcxx4 -z -library=stdcxx4
make
Verify that the SimpleTransform binary was built correctly by verifying that its' name shows
up when you ls -l ../bin
cd SimpleTransform/
cp ../../bin/SimpleTransform .
Add the new Xerces and Xalan lib directories to the system-wide library-path,
eg by export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/app/xerces/3.1.1.stdcxx4/lib:$HOME/app/xalan/1.11.stdcxx4/lib
Verify that the file foo.out is not present in the current directory (if it is, delete or
rename it).
Then run the SimpleTransform executable, eg as follows: ./SimpleTransform
Verify that the foo.out file has been created and that it contains the expected results of
applying the contents of foo.xsl to the contents of foo.xml .
[cid:image001.png@01CDC419.87FF1480]<>
Martin Elzen
Developer
E
:
Martin.Elzen@usoft.com<mailto:Martin.Elzen@usoft.com>
T
:
+31-(0)35-699 09 18<tel:+31356990918>
W
:<>
USoft B.V. | Amalialaan 126E<>
| 3743 KJ Baarn | The Netherlands
[cid:image002.png@01CDC419.87FF1480]<.>[cid:image003.png@01CDC419.87FF1480]<>[cid:image004.png@01CDC419.87FF1480]<>
________________________________
and then delete it from your system.
USoft is neither liable for the proper and complete transmission of the information contained
in this communication nor for any delay in its receipt.
|
http://mail-archives.apache.org/mod_mbox/xalan-dev/201211.mbox/%3CCBAC34FA9B9B94438B06C8A6557D7AB739ADF1CB7B@nlbrnex11%3E
|
CC-MAIN-2013-48
|
refinedweb
| 495
| 50.33
|
Prior to the holidays, my colleague Nick wrote an awesome post on getting better at functional programming by stepping out of your comfort zone, and burning the boats upon the shores of strange new languages. If you did find yourself conquering the lands of Erlang, Elm, Haskell, or the isles of Akka/Scala, my hat’s off to you.
This time, I’d like like to bring the battle a little closer to home, and show you how you can use higher-order functions to clean up one of the more prominent battlegrounds in JavaScript: Node library callbacks and framework (Express, etc...) routes.
Let’s assume, for a moment, that we’re dealing with a very simple path that just returns a statically-chosen file.
import { readFile } from "fs"; router.get("/some-file", function (req, res) { const url = "./data/some-file.ext"; readFile(url, function (err, file) { if (err) { res.writeHead(404); res.end(); } else { res.writeHead(200, { "Content-Type": "application/octet-stream" }); res.write(file); res.end(); } }); });
That's not a terrible amount of code to have to deal with, and it’s not as bad as it could be.
There are dozens (perhaps hundreds) of libraries we could lean on to write less code. But if you look at it, there really isn’t a lot of code there to begin with. What we do have is nesting, with the promise of more nesting, and that immediately makes certain things more difficult.
I’m typically a fan of Uncle Bob, and his suggestion might be to separate the code out, along the lines of intent. Some of this code is handling the negotiation of the request, and some of this code is about handling strategies.
This might be a little preemptive, but code in JS ought to be small and readable regardless.
router.get("/some-file", function (req, res) { const url = "./data/some-file.ext"; function handleError (err) { res.writeHead(404); res.end(); } function handleSuccess (file) { res.writeHead(200, { "Content-Type": "application/octet-stream" }); res.write(file); res.end(); } readFile(url, function (err, file) { if (err) { handleError(err); } else { handleSuccess(file); } }); });
We’ve gone about as far as we can or should go for now. And we’ve pulled those behaviours out of the logical flow. Perhaps this is preemptive, and there are good arguments for leaving big goopy messes alone until they can be refactored to be made of reusable behaviours (rather than just breaking an algorithm into sequential phases; don’t make Uncle Bob cry). But, the goal is to prevent it from happening in the first place.
To that end, we can see that this pattern will quickly become boilerplate.
function handleError (err) { /* ... */ } function handleSuccess (x) { /* ... */ } function nodeCallback (err, thingIWant) { if (err) { handleError(err); } else { handleSuccess(thingIWant); } } someNodeFunction(arg, nodeCallback);
It’s way less than what you might have in a language like Java, but it’s there, nonetheless. Also, all three functions are still cursed with the need to be kept inside of the route.
// PROBLEM 1: // This will blow up, because handleSuccess / handleError don't have access to `res` function handleSuccess (data) { res.writeHead(200, { "Content-Type": "application/octet-stream" }); // *BOOM* res.write(data); res.end(); } function handleError (err) { /* ... */ } // *ALSO BOOM* router.get(path, function (req, res) { readFile(filePath, function (err, file) { if (err) { handleError(err); } else { handleSuccess(file); } }); }); // PROBLEM 2: // This will blow up, because nodeCallback has no access to handleError or handleSuccess function nodeCallback (err, data) { if (err) { handleError(err); // *BOOM* } else { handleSuccess(data); // *ALSO BOOM* } } router.get(path, function (req, res) { function handleSuccess (file) { /* ... */ } function handleError (err) { /* ... */ } readFile(filePath, nodeCallback); });
Because they’re so tightly tied together, this pattern is suitable for one-offs, but it’s still not really unit-testable or reusable. This is bleak, because even if our code gets a little clearer, we still have a lot of stuff buried inside of that route.
But we have a secret weapon in JS, and functional languages in general. We’ve been hurt by it thus far, but it’s a tool we’ve simply misunderstood.
Higher Order Functions
A higher order function is simply a function which takes a function as an argument to be used, or is a function returned as a result (or both).
In most languages, even if you could return a function from a function, it wouldn't do you much good. But in most functional languages, thanks to the concept of Lexical Scoping, we have access to Closure.
You’ve been using it all along, in the router callback, and in the Node
readFile callback.
We can take this to the next level, to make this code cleaner and easier to test. First, we need to make sure we understand what we’re really dealing with, and our desired outcome.
function rememberX (x) { return function getX () { return x; }; } // ES6 version might look like the following // const rememberX = x => () => x; const get42 = rememberX(42); const get36 = rememberX(36); const get18 = rememberX(18);
Let that marinate for a second. We’re passing a value of
x and we’re getting a function back on the other side.
But, why would we want to do that?
let x = 96; get42(); // 42 get36(); // 36 x = 12; get18(); // 18 x; // 12
Note that the values of
x have all been protected from the changing value of
x on the outside. That’s because the
x which is referenced is the one that is visible at the time the function is created, and not at the time the function is called. So by passing that newly created function back to the outside world, we remember the value which was passed into the outer function as an argument.
function multiply (multiplier) { return function (x) { return x * multiplier; }; } // ES6: // const multiply = x => y => x * y; const triple = multiply(3); const quadruple = multiply(4); const numbers = [1, 2, 3]; numbers.map(triple); // [3, 6, 9] numbers.map(quadruple); // [4, 8, 12] numbers.map(triple).map(quadruple); // [12, 48, 108]
There’s a lot of power in using these types of functions in order to preserve references to configuration, but pass the function around to be used in multiple places.
Knowing this, we should be able to find a way to build a function which holds some information, and returns a preconfigured function ready to be used by something else.
function higherOrderNodeCallback (onSuccess, onError) { function nodeCallback (err, thingIWant) { if (err) { onError(err); } else { onSuccess(thingIWant); } } return nodeCallback; }
If you look at the function that gets returned, it should be clear as to what it is doing. That is, it’s doing exactly what the
nodeCallback was doing, previously.
The function on the outside, however, is taking in a success callback and an error callback.
It’s configuring that instance of the inner function to refer to those things, so that when the returned function is called, it can still access them, the same way
rememberX worked.
router.get(url, function (req, res) { function handleSuccess (file) { res.writeHead(200, { "Content-Type": "application/octet-stream" }); res.write(file); res.end(); } function handleError (err) { res.writeHead(404); res.end(); } const callback = higherOrderNodeCallback(handleSuccess, handleError); readFile(filePath, callback); });
We’ve successfully removed the Node boilerplate, by putting it in a configurable higher-order function. If we wanted to take that even further, we could create similar configurations for the success and error handlers.
function closeWithStatus (response, status, headers) { const data = headers || {}; function end () { response.writeHead(status, data); response.end(); } return end; } function closeWithContent (response, status, headers) { const data = headers || {}; function endWith (content) { response.writeHead(status, data); response.write(content); response.end(); } return endWith; } function higherOrderNodeCallback (onSuccess, onError) { return function nodeCallback (err, content) { if (err) { onError(err); } else { onSuccess(content); } }; } router.get(url, function (req, res) { const headers = { "Content-Type": "application/octet-stream" }; const onSuccess = closeWithContent(res, 200, headers); const onError = closeWithStatus(res, 404); const callback = higherOrderNodeCallback(onSuccess, onError); readFile(filePath, callback); });
Our router logic is super clean, and it should be pretty clear what everything is doing, now that we have a grasp of the how and why.
If your team was comfortable with functional programming, perhaps just this, instead:
readFile( filePath, higherOrderNodeCallback( closeWithContent(res, 200, headers), closeWithStatus(res, 404) ) );
You don’t have to write your JavaScript like a LISP, of course; but if it works for your team...
This pattern has some fun properties. By assuming nothing of the outside world, except what it’s given, we’ve created code that is happily:
- Testable
function passTest () { /* ... */ } function failTest () { /* ... */ } const errorCallback = higherOrderNodeCallback(failTest, passTest); errorCallback(errorObj, null); const successCallback = higherOrderNodeCallback(passTest, failTest); successCallback(null, successObj);
- Reusable
export const doSomething = higherOrderNodeCallback(doA, doB); import doSomething from "./do-something"; fstat("./some-file", doSomething);
- nearly SOLID, already.
Yes, okay. Uncle Bob never meant for his principles to apply to functional programming, and it almost feels like cheating here, because getting most of the way there was so easy... and we aren't using inheritance, so getting the rest of the way there is wholly unnecessary.
That’s not the end, of course. You can happily use your composed functions inside of other higher-order functions.
function runAsPromised (task, ...params) { return new Promise((resolve, reject) => task(...params, higherOrderNodeCallback(resolve, reject)) ); } runAsPromised(readFile, filePath) .then(convertToContent) .then(JSON.stringify) .then(closeWithContent(res, 200, jsonHeaders)) .catch(closeWithStatus(res, 404));
“But why would I want to use this technique, when there are dozens of libraries that can do this for me?”
The reality is that those libraries consistently make code cleaner and easier to write. If you have them and they make life simpler, then use them. That said, if there was a library to solve every problem, and a library to tie all libraries together, to produce business value, then we would likely not have jobs.
This pattern, however, can be applied at any level, and is particularly good at separating code that provides value from code that keeps the system running.
const displayItems = storeItems .filter(filterBy(systemCriteria)) .filter(filterBy(customerCriteria)) .sort(sortBy(primaryDimension, secondaryDimension));
There are even techniques for building these configurable functions automatically; “Partial Application” and “Currying”. But that’s a talk for another day.
Learn More
If you're looking for more in-depth tutorials and learning opportunities, check out our various JavaScript training options. Better yet, inquire about custom training to ramp up your knowledge of the fundamentals and best practices with custom course material designed and delivered to address your immediate needs.
|
https://rangle.io/blog/refactoring-node-with-higher-order-functions/
|
CC-MAIN-2019-22
|
refinedweb
| 1,736
| 54.83
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.