FoundationDB (FDB) is an ACID-compliant, multi-model, distributed database. The software started out life almost ten years ago. In March of 2015, Apple acquired the company behind FDB, and in 2018 they open sourced the software under an Apache 2.0 license. VMware's Wavefront runs an FDB cluster with at least a petabyte of capacity, Snowflake uses FDB for the metadata storage of their cloud database service, and Apple uses FDB for their CloudKit backend.
FDB uses the concept of Layers to add functionality. There are layers for MongoDB API-compatible document storage, record-oriented storage and SQL support among others.
FDB is optimised for SSDs to the point that you need to decide between HDD and SSD-specific configurations when setting up a database. The clustering support allows for scaling both up and down with data being automatically rebalanced. FDB utilises SQLite for its underlying storage engine.
FDB itself is written in Flow, a programming language the engineers behind FDB developed. The language adds actor-based concurrency as well as new keywords and control-flow primitives to C++11. As of this writing FDB is comprised of 100K lines of Flow / C++11 code and a further 83K lines of C code.
In this post I'll take a look at setting up a FoundationDB cluster and running a simple leaderboard example using Python. The leaderboard code used in this post originated in this forum post.
A FoundationDB Cluster, Up & Running
I've put together a cluster of three m5d.xlarge instances on AWS EC2. These instance types come with 4 vCPUs, 15 GB of RAM, 150 GB of NVMe SSD storage and up to 10 GBit/s of networking connectivity. The three instances cost $0.75 / hour to run.
On all three instances I'll first format the NVMe partition using the XFS file system. This file system was first created by Silicon Graphics in 1993 and has excellent performance characteristics when run on SSDs.
$ sudo mkfs -t xfs /dev/nvme1n1
$ sudo mkdir -p /var/lib/foundationdb/data
$ sudo mount /dev/nvme1n1 /var/lib/foundationdb/data
I'll then install some prerequisites for the Python code in this post.
$ sudo apt update
$ sudo apt install \
    python-dev \
    python-pip \
    virtualenv
On the first server I'll create a virtual environment and install the FoundationDB and Pandas python packages.
$ virtualenv ~/.fdb
$ source ~/.fdb/bin/activate
$ pip install foundationdb pandas
FoundationDB's server package depends on the client package being installed beforehand so I'll download and install that first. The following was run on all three instances.
$ wget -c
$ sudo dpkg -i foundationdb-clients_6.0.18-1_amd64.deb
The following will install the server package and was run on all three instances as well.
$ wget -c
$ sudo dpkg -i foundationdb-server_6.0.18-1_amd64.deb
I'll run a command to configure the first server to switch binding from the local network interface to the private network instead. This way it'll be reachable by the other two servers without being available to the wider internet.
$ sudo /usr/lib/foundationdb/make_public.py
/etc/foundationdb/fdb.cluster is now using address 172.30.2.218
I'll then take the contents from /etc/foundationdb/fdb.cluster on the first server and place them in the same file on the other two servers.
With the cluster configuration synced between all three machines I'll restart FDB on each of the systems.
$ sudo service foundationdb restart
I'll then configure FDB for SSD storage, triple replication and set all three instances up as coordinators.
$ fdbcli
configure triple ssd coordinators auto
This is the resulting status after those changes.
status details
Using cluster file `/etc/foundationdb/fdb.cluster'.

Configuration:
  Redundancy mode        - triple
  Storage engine         - ssd-2
  Coordinators           - 3

Cluster:
  FoundationDB processes - 3
  Machines               - 3
  Memory availability    - 15.1 GB per process on machine with least available
  Fault Tolerance        - 0 machines (1 without data loss)
  Server time            - 02/18/19 08:53:13

Data:
  Replication health     - Healthy (Rebalancing)
  Moving data            - 0.000 GB
  Sum of key-value sizes - 0 MB
  Disk space used        - 0 MB

Operating space:
  Storage server         - 142.4 GB free on most full server
  Log server             - 142.4 GB free on most full server

Workload:
  Read rate              - 14 Hz
  Write rate             - 0 Hz
  Transactions started   - 9 Hz
  Transactions committed - 1 Hz
  Conflict rate          - 0 Hz

Backup and DR:
  Running backups        - 0
  Running DRs            - 0

Process performance details:
  172.30.2.4:4500    (  1% cpu;  0% machine; 0.000 Gbps;  0% disk IO; 0.3 GB / 15.1 GB RAM )
  172.30.2.137:4500  (  1% cpu;  0% machine; 0.000 Gbps;  0% disk IO; 0.4 GB / 15.2 GB RAM )
  172.30.2.218:4500  (  1% cpu;  0% machine; 0.026 Gbps;  0% disk IO; 0.4 GB / 15.1 GB RAM )

Coordination servers:
  172.30.2.4:4500  (reachable)
  172.30.2.137:4500  (reachable)
  172.30.2.218:4500  (reachable)
A FoundationDB & Python Leaderboard
I'll use Pandas to read CSV data in chunks out of a GZIP-compressed CSV file. This file contains 20 million NYC taxi trips conducted in 2009. I'll feed the originating neighbourhood(s) and final total taxi fare into FDB.
$ python
from datetime import datetime

import fdb
from fdb.tuple import pack, unpack
import pandas as pd

fdb.api_version(510)


@fdb.transactional
def get_score(tr, user):
    user_key = pack(('scores', user))
    score = tr.get(user_key)

    if score == None:
        score = 0
        tr.set(user_key, pack((score,)))
        tr.set(pack(('leaderboard', score, user)), b'')
    else:
        score = unpack(score)[0]

    return score


@fdb.transactional
def add(tr, user, increment=1):
    score = get_score(tr, user)
    total = score + increment
    user_key = pack(('scores', user))

    tr.set(user_key, pack((total,)))
    tr.clear(pack(('leaderboard', score, user)))
    tr.set(pack(('leaderboard', total, user)), b'')

    return total


cols = ['...']  # the full list of taxi-trip column names was truncated in the original

db = fdb.open()
counter, start = 0, datetime.utcnow()

for chunk in pd.read_csv('trips_xaa.csv.gz',
                         header=None,
                         chunksize=10000,
                         names=cols,
                         usecols=['total_amount', 'pickup_ntaname']):
    for x in range(0, len(chunk)):
        add(db, chunk.iloc[x].pickup_ntaname, chunk.iloc[x].total_amount)

    counter = counter + 1
    print (counter * 10000) / (datetime.utcnow() - start).total_seconds()
The above imported at a rate of 495 records per second. While the import was taking place I was able to begin querying the leaderboard.
from operator import itemgetter

import fdb
from fdb.tuple import pack, unpack

fdb.api_version(510)

db = fdb.open()


@fdb.transactional
def top(tr, count=3):
    out = dict()

    iterator = tr.get_range_startswith(pack(('leaderboard',)),
                                       reverse=True)

    for key, _ in iterator:
        _, score, user = unpack(key)

        if score in out.keys():
            out[score].append(user)
        elif len(out.keys()) == count:
            break
        else:
            out[score] = [user]

    return dict(sorted(out.items(),
                       key=itemgetter(0),
                       reverse=True))


top(db)
This is the top three pick up points by total cab fare after a few minutes of importing the CSV file.
{75159.25000000016: ['Hudson Yards-Chelsea-Flatiron-Union Square'],
 47637.469999999936: ['SoHo-TriBeCa-Civic Center-Little Italy'],
 134147.24000000008: ['Midtown-Midtown South']}
October 23, 2019
Michael Griffiths
Having fresh and customized content goes a long way in keeping customers engaged with your skill. This means you will need to not only include data in your skill, but regularly update it as well. There are a number of ways you can include data in your skill—from including it directly in the code to calling an external service for it, to storing it in a database. The best option will clearly depend on the use case and the data. However, for use cases like a lookup table or a single table query, Amazon S3 has an inexpensive and simple option called S3 Select. This service treats a file as a relational database table where read-only queries can retrieve data.
S3 Select allows you to treat individual files (or objects) stored in an S3 bucket as relational database tables, where you can issue SQL commands like “SELECT column1 FROM S3Object WHERE column2 > 0” against a single file at a time to retrieve data from that file. By using this feature, your skill can query data without having to embed information in your code, set up or access an external service, or manage a database. For example, if your skill needs to access information only about New York City from a list of national events, or your skill needs to look up translations of different descriptions based on the user’s locale, your skill can query for just the needed data from files in S3.
Using S3 Select also allows you to easily update the content. Simply upload a new version of the file to your S3 bucket. Data files can be in CSV, JSON, or Apache Parquet format.
Note: If you have non-technical staff updating your content, consider using CSV files since they can be edited by popular spreadsheet programs.
Additionally, with S3 Select, no provisioning is required. Upload the file and you’re ready to go. When there’s a surge in your skill usage, you do not need to make any adjustments. Each request counts just like any other GET request to S3. If you are eligible for the AWS Free Tier, these requests to the S3 bucket will count towards your usage. Another advantage to this approach is that your Lambda function doesn’t have to be scaled up to a higher memory setting since it only has to process the needed data (not the entire file), and the entire file doesn’t need to be loaded into memory.
Next, we’ll walk through the code to use S3 Select for both Python and Node.js skills. In these examples, we are selecting a time zone for a given zip code from a CSV formatted file. This example would be useful if you wanted to look up the time in another city.
Note: The Alexa Settings API is a great way to easily get the time zone for the device communicating with your skill.
To add S3 Select to your Python skill, you first need to ensure the AWS SDK for Python (boto3) is imported. The package is automatically included in all AWS-provided Lambda runtimes, so you won’t need to add it to your requirements file. Next, create an S3 client object:
import boto3

s3 = boto3.client('s3')
At the point in your code where you want to select data, add the following block, modifying the Bucket, Key, Expression, and other attributes to match your situation.
try:
    response = s3.select_object_content(
        Bucket='bucket-name',
        Key='datafiles/zipcode_lookup.csv',
        Expression="SELECT timezone from S3Object s WHERE s.zip ='98101' LIMIT 1",
        ExpressionType='SQL',
        InputSerialization={
            'CSV': {
                'FileHeaderInfo': 'USE'
            },
            'CompressionType': 'NONE'
        },
        OutputSerialization={
            'CSV': {
            }
        },
        RequestProgress={
            'Enabled': False
        }
    )

    data = ""
    event_stream = response['Payload']
    end_event_received = False

    for event in event_stream:
        # capture the data from the Records events
        if 'Records' in event:
            # the Payload arrives as bytes, so decode it before appending
            data += event['Records']['Payload'].decode('utf-8')
        elif 'Progress' in event:
            print(event['Progress']['Details'])
        elif 'End' in event:
            print('Result is complete')
            end_event_received = True

    if not end_event_received:
        raise RuntimeError("End event not received, request incomplete.")
except Exception as e:
    print(e)
    raise e
From the code, you can see the S3 Select returns an event stream. The “for” loop then processes each event looking for the records events. That’s it! Your skill now has the ability to read from a CSV file like it’s a database.
To add S3 Select to your Node.js skill, first you need to require the AWS SDK for Node.js in your skill. It is automatically included in the Node.js Lambda runtimes, so you don’t need to add it to your package.json. Next, create an S3 client object.
Note: The example below is set up to work with an Alexa-hosted skill, but it can also work in any Lambda function.
const aws = require('aws-sdk');
const s3 = new aws.S3();
Add the following block to set up querying for the data, modifying the bucketName, keyName, query, and other attributes to match your situation.
const lookupTimezone = (zipCode) => {
    return new Promise((resolve, reject) => {
        try {
            const bucketName = process.env.S3_PERSISTENCE_BUCKET;
            const keyName = 'zipcode.csv';
            const query = `SELECT timezone from S3Object s WHERE s.zip ='${zipCode}' LIMIT 1`;
            let returnVal = 0;
            const params = {
                Bucket: bucketName,
                Key: keyName,
                ExpressionType: 'SQL',
                Expression: query,
                InputSerialization: {
                    CSV: {
                        FileHeaderInfo: 'USE',
                    },
                    CompressionType: 'NONE',
                },
                OutputSerialization: {
                    CSV: { },
                }
            };

            console.log('start select');
            s3.selectObjectContent(params, (err, data) => {
                if (err) {
                    reject(0);
                }
                const eventStream = data.Payload;

                eventStream.on('data', (event) => {
                    if (event.Records) {
                        returnVal = event.Records.Payload.toString();
                        resolve(returnVal);
                    } else if (event.Stats) {
                        //console.log(`Processed ${event.Stats.Details.BytesProcessed} bytes`);
                    } else if (event.End) {
                        //console.log('SelectObjectContent completed');
                    }
                });

                // Handle errors encountered during the API call
                eventStream.on('error', (err) => {
                    switch (err.name) {
                        // Check against specific error codes that need custom handling
                    }
                });

                eventStream.on('end', () => {
                    // Finished receiving events from S3
                    console.log(`returning: ${returnVal}`);
                    resolve(returnVal);
                });
            });
        } catch (e) {
            console.log(e);
            reject(0);
        }
    })
};
From the code, you can see the S3 Select returns an event stream. Given the asynchronous nature of Node.js, this is wrapped in a promise so all the events can be processed before the requesting code resumes. (If you are new to promises, check out this blog post about requesting data from an external API.) The nested function that processes each event watches for the data event. At the point in your code where you want to query the data, call it using this code:
const zip = await lookupTimezone('98101');
That’s it! Your skill now has the ability to read from a CSV file like it is a database.
The S3 Select feature is best suited for read-heavy scenarios. If your use case includes data specific to a user, check out persistent attributes. Persistent attributes store data in S3 or Amazon DynamoDB using the customers’ userId as the primary key. If your use case includes occasional writes to the S3 file, and having a slight delay in making those updates is acceptable, consider queueing those updates in an Amazon SQS queue, and having a separate Lambda function to make the updates. However, if you need to routinely both read and write data as part of your skill, the eventual consistency model of S3 for updates to an existing object might not be best fit. You may be better served with using Amazon DynamoDB or Amazon Aurora Serverless instead.
Using Amazon S3 like a relational database provides an easy way to include fresh and dynamic content in your skill. Content can be updated by non-technical staff, and easily searched for and retrieved by your skill code—so you can provide an engaging experience for your customers. We're excited to see how you improve your customer experience with these tips!
Amazon S3 Select Documentation
Blog: Making HTTP Requests to Get Data from an External API using the ASK SDK for Node.js V2
Boto3 SDK Documentation for S3
Node.js SDK Documentation for S3
DIY: kernel panic OS X and iOS in 10 LOC
After receiving quite a few reports from users of kernel panics upon attaching a second time to a process, I finally got around to debugging the kernel to figure out what was going on.
First, a little background. Frida hooks function calls by rewriting the function’s prologue in memory. In order to do so it has to make the containing memory page writable, patch the code, and later revert it. As shared libraries are mapped and not copied into memory, the kernel can share their memory pages between processes. Those memory pages are copy-on-write, and any local modification will simply give your process its own copy of the memory page in question. Whenever Frida intercepts a function in a shared library, this side-effect occurs. Upon attaching to a process, Frida itself hooks one such function, and as a user of Frida you may be hooking plenty of them as well. Also, every time Frida attaches to a process it probes portions of its address space, which also means parsing the metadata of its loaded shared libraries. This parsing ends up reading some of those same memory pages.
Having been increasingly frustrated by this looming kernel panic but never finding a big enough chunk of time to investigate it properly, an opportunity finally presented itself. I fired up /bin/cat as a guinea pig program in one terminal, and attached to it with Frida once, then detached. Next, with /bin/cat still running, I ran vmmap and copied its output to my debugger machine. Next I requested Frida to attach a second time, and the kernel panic triggered as usual. The machine was now waiting for a debugger to attach, so I fired up lldb and attached to it. A quick look at the call-stack revealed that it was hitting a failing assertion while handling mach_vm_read_overwrite. By looking at the arguments it was clear where it was requested to read from, and how many bytes. Looking back at the vmmap output, I noticed something peculiar. It was asked to read the first pages of a shared library, and unlike all the other libraries, and all other pages of this library, the second page was marked PRV (private) and not COW (copy-on-write). This made perfect sense, because I knew Frida hooked one function in this particular library. “Could it be a bug when handling a read spanning COW and PRV pages?” I quickly wrote a tiny C program to test out this theory, and yep, that was the issue. After simplifying it further I arrived at this:
#include <unistd.h>
#include <mach/mach.h>
#include <mach-o/dyld.h>

#ifndef __LP64__
# define mach_vm_protect vm_protect
# define mach_vm_read_overwrite vm_read_overwrite
#endif

extern kern_return_t mach_vm_protect (vm_map_t,
    mach_vm_address_t, mach_vm_size_t, boolean_t, vm_prot_t);
extern kern_return_t mach_vm_read_overwrite (vm_map_t,
    mach_vm_address_t, mach_vm_size_t, mach_vm_address_t,
    mach_vm_size_t *);

int
main (int argc, char * argv[])
{
  volatile char * library;
  const mach_vm_size_t page_size = getpagesize ();
  const mach_vm_size_t buffer_size = 3 * page_size;
  char buffer[buffer_size];
  mach_vm_size_t result_size;

  library = (char *) _dyld_get_image_header (2);

  mach_vm_protect (mach_task_self (),
      (mach_vm_address_t) (library + page_size), page_size, FALSE,
      VM_PROT_READ | VM_PROT_WRITE | VM_PROT_COPY);

  library[page_size]++; /* COW -> PRV transition */
  library[page_size]--; /* undo dummy-modification */

  result_size = 0;

  /* panic! */
  mach_vm_read_overwrite (mach_task_self (),
      (mach_vm_address_t) library, buffer_size,
      (mach_vm_address_t) buffer, &result_size);

  return 0;
}
Compile and run, and observe an instant kernel panic on the latest OS X and iOS. Update: Incorporated improvements from.
Latest Frida from git now has a workaround where we limit our reads to one page at a time. This will be part of the upcoming 1.6.9 release, to be released soon.
Note: I reported this to Apple on the 20th of February 2015, though my impression from past events is that they're not likely to fix this anytime soon.
28 April 2011 18:06 [Source: ICIS news]
By Joe Kamalick
WASHINGTON (ICIS)--The
In addition, according to the latest housing sector outlook, the home building industry still faces challenges that could delay or slow a comeback.
David Crowe, chief economist at the National Association of Home Builders (NAHB), expects that a turnaround in new home demand and construction will emerge in the second half of this year and continue into 2012 and beyond.
But it will be a long, slow pull. $16,000 (€10,880) worth of chemicals and derivatives used in the structure or in production of component materials.
In NAHB’s semi-annual housing forecast and outlook, Crowe said that positive indicators include historically low home mortgage interest rates, now just under 5%, that he does not think will increase substantially before the end of 2012.
As home prices remain at very low levels - and indeed as residential prices continue to fall - housing affordability remains very favourable, he said. He noted that 74% of recently sold homes were affordable for US families at the median income level. young adult or the parent,” he said, “and we will see housing demand growth develop from that.”
When young adults leave their childhood homes, finish college or otherwise move into the workforce, those departures generate what demographers call “household formations”. Those young adults typically would rent apartments or homes and eventually buy a residence, driving demand for multi-unit and single-family housing.
But because of the recession, that normal flow of household formations was blocked.
Crowe says that based on US population growth in recent years, there may be as many as 2m household formations that should have developed but could not, and that backed-up demand for housing must at some time break forth.
However, Crowe conceded that the data on household formations is uncertain, and he also has worries about factors that could impede a housing recovery.
He noted that home builders continue to face difficulty in getting project development loans.
That compares with annual housing starts that were near, at and even above 2m units from late 2003 through early 2006 before the bottom fell out with the sub-prime mortgage collapse and the broader
Longer term, he expects the market to return to a more or less normal level in later years, with an average annual pace of new home construction at 1.5m units - but never again to reach the 2m-plus annual rate seen in the boom years.
Mark Zandi, chief economist at Moody’s Analytics, told the NAHB outlook conference that he was not quite so optimistic.
He said he sees the foreclosure crisis continuing, and the ongoing cascade of foreclosed or short-sale homes onto the housing market could well drive prices still lower.
Indeed, earlier this week the Standard & Poors (S&P) home price index showed housing prices in decline in February for the seventh straight month, with S&P warning that the home construction industry was within “a hair’s breadth” of a double-dip recession.
Zandi cautioned that the real estate pricing decline could develop into a vicious, downward spiral.
As home prices continue to fall, more and more homeowners become “under water” on their mortgage loans, meaning that they owe more on the note than the residence is worth.
That in turn leads to more loan defaults and more foreclosed and short-sale homes being dumped onto the market, depressing property values still further, putting more owners under water, and so on and so on.
“As long as house prices are falling, it is a good reason to be nervous,” Zandi said.
At best, he thinks it will take 12-18 months for the flood of foreclosed residential properties to be worked through the market.
Zandi agrees that there is an unknown backlog in household formations, and those would-be home buyers will in time move into the market.
But he also anticipates two key societal and policy developments that potentially could change the
“I think we have begun a long-running shift away from single-family home ownership to renting, in part because the single-family home as an investment has not done well in recent years,” he noted.
“Younger householders don’t think about a house as our parents did or as we did, as an asset that will inevitably appreciate,” he said, so many of the 20-30 year-olds who ordinarily might be prospective home buyers will not become owners.
Second, Zandi thinks that sooner or later there will be a shift in government policies on mortgage financing requirements and perhaps tax benefits for home ownership that will make home buying even less attractive as a personal investment.
Federal financial policies likely will require lenders to retain higher levels of capital, which will precipitate higher mortgage interest rates and higher qualification criteria for borrowers, including much higher down-payments - putting home ownership well beyond the reach of many who otherwise might be willing buyers.
The heyday of the
(
Homework 2
Due by 11:59pm on Tuesday, 9
Vitamins are straightforward questions that are directly related to lecture examples. Watch lecture, and you should be able to solve them. Please remember that vitamins should be completed alone.

# The identity function, defined using a lambda expression!
identity = lambda k: k

Q1: Make Adder with a Lambda
Implement the make_adder function from lecture using a single return statement that returns the value of a lambda expression.

def make_adder(n):
    """Return a function that takes an argument K and returns N + K.

    >>> add_three = make_adder(3)
    >>> add_three(1) + add_three(2)
    9
    >>> make_adder(1)(2)
    3
    """
    "*** YOUR CODE HERE ***"
    return lambda ________________
Use OK to test your code:
python3 ok -q make_adder
On 11/24/2010 03:02 AM, Nico Sabbi wrote:
> David.

Hi Nico,

Glad to hear you're around and thanks for responding! The Debian maintainer is actually using dvbstream_0.6+cvs20090621, which looks to me like the latest version -- I used

cvs -z3 -d:pserver:anonymous at dvbtools.cvs.sourceforge.net:/cvsroot/dvbtools co -P dvbstream

The CVS tune.h still contains the block that restricts the DVB_API to an outdated version,

#undef DVB_ATSC
#if defined(DVB_API_VERSION_MINOR)
#if DVB_API_VERSION == 3 && DVB_API_VERSION_MINOR >= 1
#define DVB_ATSC 1
#endif
#endif

and tune.c is not different from the Debian tune.c. Where is the working code for ATSC QAM tuning? I'd be happy to work with you if you can provide some pointers.

Salve,
Dave
Storage Formats
Prerequisites
Outcomes
Understand that data can be saved in various formats
Know where to get help on file input and output
Know when to use csv, xlsx, feather, and sql formats
Data
Results for all NFL games between September 1920 and February 2017
# Uncomment following line to install on colab
#! pip install
import pandas as pd
import numpy as np
File Formats
Data can be saved in a variety of formats.
pandas understands how to write and read DataFrames to and from many of these formats.
We defer to the official documentation for a full description of how to interact with all the file formats, but will briefly discuss a few of them here.
CSV

What is it? CSVs store data as plain text (strings) where each row is a line and columns are separated by commas (,).
Pros
Widely used (you should be familiar with it)
Plain text file (can open on any computer, “future proof”)
Can be read from and written to by most data software
Cons
Not the most efficient way to store or access
No formal standard, so there is room for user interpretation on how to handle edge cases (e.g. what to do about a data field that itself includes a comma)
When to use:
A great default option for most use cases
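One edge case worth seeing concretely is a field that contains the separator itself. pandas, like most CSV writers, handles this by quoting the field; since there is no formal standard, though, other producers or consumers may handle it differently. A small sketch (the frame contents here are made up):

```python
import pandas as pd

# a value that contains a comma -- the classic CSV edge case
df = pd.DataFrame({"name": ["Doe, Jane"], "score": [90]})

text = df.to_csv(index=False)
print(text)
# name,score
# "Doe, Jane",90
```

pandas wraps the offending field in double quotes so the extra comma is not read as a column break.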
xlsx
What is it? xlsx is a binary file format used as Excel’s default.
Pros:
Standard format in many industries
Easy to share with colleagues that use Excel
Cons:
Quite slow to read/write large amounts of data
Stores both data and metadata like styling and display information and even plots. This metadata is not always portable to other file formats or programs.
When to use:
When sharing data with Excel
When you would like special formatting to be applied to the spreadsheet when viewed in Excel
Parquet
What is it? Parquet is a custom binary format designed for efficient reading and writing of data stored in columns.
Pros:
Very fast
Naturally understands all dtypes used by pandas, including multi-index DataFrames
Very common in “big data” systems like Hadoop or Spark
Supports various compression algorithms
Cons:
Binary storage format that is not human-readable
When to use:
If you have “not small” amounts (> 100 MB) of unchanging data that you want to read quickly
If you want to store data in an size-and-time-efficient way that may be accessed by external systems
Feather
What is it? Feather is a custom binary format designed for efficient reading and writing of data stored in columns.
Pros:
Very fast – even faster than parquet
Naturally understands all dtypes used by pandas
Cons:
Can only read and write from Python and a handful of other programming languages
New file format (introduced in March ‘16), so most files don’t come in this format
Only supports standard pandas index, so you need to reset_index before saving and then set_index after loading
When to use:
Use as an alternative to Parquet if you need the absolute best read and write speeds for unchanging datasets
Only use when you will not need to access the data in a programming language or software outside of Python, R, and Julia
SQL
What is it? SQL is a language used to interact with relational databases… more info
Pros:
Well established industry standard for handling data
Much of the world’s data is in a SQL database somewhere
Cons:
Complicated: to have full control you need to learn another language (SQL)
When to use:
When reading from or writing to existing SQL databases
NOTE: We can cover interacting with SQL databases in a dedicated lecture – contact us for more information.
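Even without a dedicated lecture, a tiny sketch using Python's built-in sqlite3 driver shows the basic pattern; the table name, column names, and numbers here are made up:

```python
import sqlite3

import pandas as pd

con = sqlite3.connect(":memory:")  # throwaway in-memory database

df = pd.DataFrame({"year": [2009, 2010], "trips": [170, 165]})
df.to_sql("taxi", con, index=False)  # write the DataFrame as a table

# read back only the rows we need, using SQL
recent = pd.read_sql("SELECT * FROM taxi WHERE year >= 2010", con)
```

The same two methods (`to_sql` and `read_sql`) work with other database drivers as well, which is what makes this format so portable.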
Writing DataFrames
Let’s now talk about saving a DataFrame to a file.
As a general rule of thumb, if we have a DataFrame df and we would like to save it as a file of type FOO, then we would call the method named df.to_FOO(...).
We will show you how this can be done and try to highlight some of the items mentioned above.
But, we will not cover all possible options and features — we feel it is best to learn these as you need them by consulting the appropriate documentation.
First, we need some DataFrames to save. Let’s make them now.
Note that by default df2 will be approximately 10 MB. If you need to change this number, adjust the value of the wanted_mb variable below.
np.random.seed(42)  # makes sure we get the same random numbers each time

df1 = pd.DataFrame(
    np.random.randint(0, 100, size=(10, 4)),
    columns=["a", "b", "c", "d"]
)

wanted_mb = 10  # CHANGE THIS LINE
nrow = 100000
ncol = int(((wanted_mb * 1024**2) / 8) / nrow)
df2 = pd.DataFrame(
    np.random.rand(nrow, ncol),
    columns=["x{}".format(i) for i in range(ncol)]
)

print("df2.shape = ", df2.shape)
print("df2 is approximately {} MB".format(df2.memory_usage().sum() / (1024**2)))
df2.shape =  (100000, 13)
df2 is approximately 9.9183349609375 MB
df.to_csv
Let's start with df.to_csv. Without any additional arguments, the df.to_csv function will return a string containing the csv form of the DataFrame:
# notice the plain text format -- one row per line, columns separated by `,`
print(df1.to_csv())
,a,b,c,d
0,51,92,14,71
1,60,20,82,86
2,74,74,87,99
3,23,2,21,52
4,1,87,29,37
5,1,63,59,20
6,32,75,57,21
7,88,48,90,58
8,41,91,59,79
9,14,61,61,46
If we do pass an argument, the first argument will be used as the file name.
df1.to_csv("df1.csv")
Run the cell below to verify that the file was created.
import os os.path.isfile("df1.csv")
True
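Reading is the mirror image of writing: each to_FOO method generally has a pd.read_FOO counterpart. A quick round-trip check, using a throwaway frame and file name:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
df.to_csv("roundtrip.csv")

# index_col=0 tells pandas the first csv column is the index we saved
back = pd.read_csv("roundtrip.csv", index_col=0)
```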
Let's see how long it takes to save df2 to a file. (Because of the %%time at the top, Jupyter will report the total time to run all code in the cell.)
%%time
df2.to_csv("df2.csv")
CPU times: user 1.7 s, sys: 51.5 ms, total: 1.76 s
Wall time: 1.79 s
As we will see below, this isn’t the fastest file format we could choose.
df.to_excel¶
When saving a DataFrame to an Excel workbook, we can choose both the name of the workbook (file) and the name of the sheet within the file where the DataFrame should be written.
We do this by passing the workbook name as the first argument and the sheet name as the second argument as follows.
df1.to_excel("df1.xlsx", "df1")
pandas also gives us the option to write more than one DataFrame to a workbook.
To do this, we need to first construct an instance of pd.ExcelWriter and then pass that as the first argument to df.to_excel.
Let’s see how this works.
with pd.ExcelWriter("df1.xlsx") as writer:
    df1.to_excel(writer, "df1")
    (df1 + 10).to_excel(writer, "df1 plus 10")
The with ... as ...: syntax used above is an example of a context manager.
We don’t need to understand all the details behind what this means (google it if you are curious).
For now, just recognize that particular syntax as the way to write multiple sheets to an Excel workbook.
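If you are curious, the whole pattern can be sketched in a few lines. The Writer class below is a stand-in, not pandas’ actual ExcelWriter; it just shows what Python does with a with block:

```python
# Minimal sketch of what a `with` block does (illustrative only --
# not pandas' real ExcelWriter implementation).
events = []

class Writer:
    def __enter__(self):            # runs at the start of the `with` block
        events.append("open")
        return self                 # this is what gets bound after `as`

    def write(self, name):
        events.append("write " + name)

    def __exit__(self, exc_type, exc, tb):
        events.append("close")      # runs even if the block raised an error

with Writer() as w:
    w.write("df1")
    w.write("df1 plus 10")

print(events)  # ['open', 'write df1', 'write df1 plus 10', 'close']
```

The point is that the "open" and "close" steps are handled for you, which is why the workbook is saved correctly even though we never call a save method.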
Warning
Saving df2 to an excel file takes a very long time.
For that reason, we will just show the code and hard-code the output we saw when we ran the code.
%%time
df2.to_excel("df2.xlsx")
Wall time: 25.7 s
pyarrow.feather.write_feather¶
As noted above, the feather file format was developed for very efficient reading and writing between Python and your computer.
Support for this format is provided by a separate Python package called pyarrow.
This package is not installed by default. To install it, copy/paste the code below into a code cell and execute.
!pip install pyarrow
The parameters for pyarrow.feather.write_feather are the DataFrame and file name.
Let’s try it out.
import pyarrow.feather
pyarrow.feather.write_feather(df1, "df1.feather")
%%time
pyarrow.feather.write_feather(df2, "df2.feather")
An example timing result:

CPU times: user 15.5 ms, sys: 20 ms, total: 35.5 ms
Wall time: 25.1 ms
As you can see, saving this DataFrame in the feather format was far faster than either CSV or Excel.
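If you want to repeat these comparisons outside Jupyter, where %%time is unavailable, a small helper built on the standard library works. The commented calls assume the df2 and pyarrow setup from above:

```python
import time

def timed(label, fn):
    """Run fn() once and report the elapsed wall-clock time."""
    start = time.perf_counter()
    fn()
    elapsed = time.perf_counter() - start
    print("{}: {:.3f} s".format(label, elapsed))
    return elapsed

# e.g. timed("csv", lambda: df2.to_csv("df2.csv"))
#      timed("feather", lambda: pyarrow.feather.write_feather(df2, "df2.feather"))
t = timed("noop", lambda: sum(range(1000)))
```

time.perf_counter is the recommended clock for short benchmarks because it has the highest available resolution and is unaffected by system clock adjustments.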
Reading Files into DataFrames¶
As with the df.to_FOO family of methods, there are similar pd.read_FOO functions. (Note: they are defined in pandas, not as methods on a DataFrame.)
These methods have many more options because data storage can be messy or wrong.
We will explore these in more detail in a separate lecture.
For now, we just want to highlight the differences in how to read data from each of the file formats.
Let’s start by reading the files we just created to verify that they match the data we began with.
# notice that index was specified in the first (0th -- why?) column of the file
df1_csv = pd.read_csv("df1.csv", index_col=0)
df1_csv.head()
df1_xlsx = pd.read_excel("df1.xlsx", "df1", index_col=0)
df1_xlsx.head()
# notice feather already knows what the index is
df1_feather = pyarrow.feather.read_feather("df1.feather")
df1_feather.head()
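Rather than eyeballing head(), you can ask pandas to compare the round trip for you. This sketch assumes pandas is available and uses a small throwaway frame; DataFrame.equals checks values, column names, and the index:

```python
import pandas as pd

# a tiny frame just for the demonstration
df = pd.DataFrame({"a": [51, 60], "b": [92, 20]})
df.to_csv("roundtrip.csv")
back = pd.read_csv("roundtrip.csv", index_col=0)

# True only if values, column names, and index all survived the trip
print(back.equals(df))  # True
```

This kind of check is handy when a format is lossy (for example, CSV forgets dtypes such as dates unless you tell pd.read_csv to parse them).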
With the pd.read_FOO family of functions, we can also read files from places on the internet.
We saved our df1 DataFrame to a file and posted it online.
Below, we show an example of using pd.read_csv to read this file.
df1_url = ""
df1_web = pd.read_csv(df1_url, index_col=0)
df1_web.head()
Practice¶
Now it’s your turn…
In the cell below, the variable url contains a web address to a csv file containing the result of all NFL games from September 1920 to February 2017.
Your task is to do the following:
Use pd.read_csv to read this file into a DataFrame named nfl
Print the shape and column names of nfl
Save the DataFrame to a file named nfl.xlsx
Open the spreadsheet using Excel on your computer
If you finish quickly, do some basic analysis of the data. Try to do something interesting. If you get stuck, here are some suggestions for what to try:
Compute the average total points in each game (note, you will need to sum two of the columns to get total points).
Repeat the above calculation, but only for playoff games.
Compute the average score for your favorite team (you’ll need to consider when they were team1 vs team2).
Compute the ratio of “upsets” to total games played. An upset is defined as a team with a lower ELO winning the game.
url = ""
url = url + "3488b7d0b46c5f6583679bc40fb3a42d729abd39/data/nfl_games.csv"
# your code here --- create more cells if necessary
Scripting - Test Case Timing - Sleep to delay / wait
Hey guys I think i'm running across some transaction timing issues with my test cases which may be causing some data corruption at my service endpoint. I've been toying around with the folder level test case logic features but I can't seem to get what I want.
Is there a way to put a pause in top to bottom execution per test case or per test suite?
Thanks,
LeapTester
For each delay that you want to occur between a test, you'll need to add a method tool that has the following content:
from soaptest.api import SOAPUtil
def addDelay():
SOAPUtil.sleep(3000)
I've attached an example project that shows you how this can be done. For convenience, (if you have a lot of delays that need to be added), I've also shown in my example how to create a global tool so that the method tool needs to be created only once and then can be quickly added multiple times in your test suites. When you add the tool using the "right-click" menu, you'll choose "Existing" instead of "New Tool".
-Mike
Thx again,
-LT- | https://forums.parasoft.com/discussion/987/scripting-test-case-timing-sleep-to-delay-wait | CC-MAIN-2020-24 | refinedweb | 223 | 75.34 |
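As an aside, outside of SOAPtest the same kind of delay is just a sleep call. A plain-Python sketch (assuming, as the tool above suggests, that the delay is given in milliseconds):

```python
import time

def add_delay(ms=3000):
    """Pause for the given number of milliseconds,
    mirroring the SOAPUtil.sleep(3000)-style tool above."""
    time.sleep(ms / 1000.0)

start = time.perf_counter()
add_delay(50)   # 50 ms, just to demonstrate
elapsed = time.perf_counter() - start
print(round(elapsed, 2))  # roughly 0.05
```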
A while back I made a feature request that adds a way of scheduling code to be executed at a specific time. Some time later, Wix added jobs, which does essentially that.
Now, I also requested a way of creating jobs using code, an API. It would be something like the following example:
import wixJobs from 'wix-jobs';

wixJobs.insert( {
  "functionLocation": "/utils/dbUtils.deleteExpired",
  "description": "Delete the expired DB items",
  "executionConfig": {
    "time": "22:00",
    "dayOfWeek": "Sunday"
  }
} );
And also, a way of removing a job, like the following example:
import wixJobs from 'wix-jobs';

wixJobs.remove("Delete the expired DB items");
In the previous example I used the description of a job to remove it; there could be an id, or we could just use the item index.
Thanks. | https://www.wix.com/corvid/forum/feature-requests/create-a-job-using-code | CC-MAIN-2020-24 | refinedweb | 128 | 54.12 |
Each location step is made up of an axis, a node test, and zero or more predicates.
An axis indicates how to search for nodes. Here are the XPath 1.0 axes:
The child axis
The attribute axis
The ancestor axis
The ancestor-or-self axis
The descendant axis
The descendant-or-self axis
The following axis
The following-sibling axis
The namespace axis
The parent axis
The preceding axis
The preceding-sibling axis
The self axis
The node test identifies which nodes along the axis to select, either by name or by type.
The text() node test selects a text node.
Predicates are enclosed in [ and ] and may contain any valid XPath 1.0 expression. | https://flylib.com/books/en/1.256.1.40/1/ | CC-MAIN-2019-13 | refinedweb | 101 | 55.74 |
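To experiment with location steps without an XSLT processor, Python's standard xml.etree.ElementTree supports a small subset of XPath 1.0 (the child and descendant axes, name tests, and simple predicates), which is enough to try the ideas above:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("<order><item>pen</item><item>ink</item></order>")

# child axis with a name node test
items = doc.findall("./item")
print([i.text for i in items])   # ['pen', 'ink']

# a positional predicate in [ and ] picks the second item
second = doc.findall("./item[2]")
print(second[0].text)            # 'ink'
```

For full XPath 1.0 (the complete axis list, text() node tests inside paths, arbitrary expressions in predicates) you would need a library such as lxml rather than the standard-library subset.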
03-30-2017 09:02 AM
03-30-2017 09:10 AM
If that option can be set programmatically, I would try looking in the DraftFilePreferences object instead of the DraftDocument object. (But I haven't seen a relatively obvious solution yet.)
03-30-2017 09:18 AM
I already looked in the DraftFilePreferences but wasn't able to see any change there.
On the left side is the draft with the option enabled and on the right with it disabled (I used Spy for Solid Edge).
Seems that all settings are the same.
04-05-2017 05:20 AM
Bump*
04-05-2017 09:51 AM
You need to use the Application.SetGlobalParameters() method, but you need to find the specific constants that you want (in this case, "Show empty callouts and text boxes").
04-06-2017 05:26 AM
Hey KabirCosta,
Thanks for your reply.
I also checked all Global Constants by comparing them programmatically.
The only thing that changed when using the enum values is seApplicationGlobalSystemInfo, and that one only changes because it contains a timestamp and some system information like memory usage.
Maybe it's not possible to change this setting using a program?
Kind regards,
loginator
04-06-2017 08:38 AM
The only alternative I see is not the best one, but it can help solve your problem.
Here is what you can do:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using SolidEdgeFramework;
using SolidEdgeCommunity;
using SolidEdgeDraft;
using SolidEdgeFrameworkSupport;

namespace DuvidaEmptyText
{
    class Program
    {
        [STAThread]
        static void Main(string[] args)
        {
            // Main variables
            Application application = null;
            Documents documents = null;
            DraftDocument draftDocument = null;
            Sheets sheets = null;
            Sheet sheet = null;
            TextBoxes textBoxes = null;
            TextBox textBox = null;
            Layers layers = null;
            Layer layer = null;
            string emptyText = null;

            try
            {
                OleMessageFilter.Register();

                // Solid Edge Connect
                application = SolidEdgeUtils.Connect(true, true);

                // Get Active document
                draftDocument = (DraftDocument)application.ActiveDocument;

                // Get sheets
                sheets = draftDocument.Sheets;

                // Find textbox sheet by sheet
                for (int iSheet = 1; iSheet <= sheets.Count; iSheet++)
                {
                    sheet = sheets.Item(iSheet);

                    // Get Layer and create a new layer, to hide text
                    layers = sheet.Layers;
                    layer = layers.Add("LayerToHide");

                    // Verify if the sheet contains textBoxes
                    textBoxes = (TextBoxes)sheet.TextBoxes;
                    if (textBoxes.Count > 0)
                    {
                        // Find if textBox contains text, if not, hide this
                        for (int iTextBox = 1; iTextBox <= textBoxes.Count; iTextBox++)
                        {
                            textBox = textBoxes.Item(iTextBox);
                            if (textBox.Text == emptyText)
                            {
                                textBox.Layer = "LayerToHide";
                            }
                        }
                    }

                    // Finally, hide the empty texts
                    layer.Show = false;
                }
            }
            catch (System.Exception ex)
            {
                Console.WriteLine(ex.Message);
            }
            finally
            {
                OleMessageFilter.Unregister();
            }
        }
    }
}
The final result is:
It isn't the best way, but works.
04-06-2017 09:45 AM
FYI, this setting is a file specific setting and is not a user or site setting. In other words, you will not find this setting saved to the user registry anywhere.
If you want this to always be Off for all files then you would need to modify your template file and set it to Off in the template. Then all new files created from the template would have it set to Off.
04-06-2017 09:48 AM
Hey KabirCosta,
maybe that's an option
Do you know if there's an event that fires when text of a textbox is changed?
04-06-2017 09:49 AM
Hey dave,
thanks for your help.
I already changed the setting in the template.
But I also want to change it in old documents.
Kind regards loginator | https://community.plm.automation.siemens.com/t5/Solid-Edge-Developer-Forum/Set-quot-Show-empty-callouts-and-text-boxes-quot-setting/td-p/400125 | CC-MAIN-2018-09 | refinedweb | 564 | 59.5 |
If you have C code that you want to use in your Simulink® model, you can call that code from the model using a MATLAB Function block. MATLAB Function blocks call C code using MATLAB® commands. You can also generate code from models with MATLAB Function blocks that call external C code.
To call external C code in a Simulink model, follow these steps:
Identify the source (.c) and header (.h) files that contain the C code you want to use in your model.
Insert a MATLAB Function block into your model.
In the MATLAB Function block, use the coder.ceval function to call the C code. To pass data by reference, use coder.ref, coder.rref, or coder.wref.
Specify the C source and header files in the Simulation Target pane of the Configuration Parameters window. Include the header file using double quotations, for example, #include "program.h". If you need to access C source and header files outside your working folder, list the path in the Simulation Target pane, in the Include Directories text box.
Alternatively, use the coder.cinclude and coder.updateBuildInfo functions to specify source and header files in your MATLAB code. To develop an interface to external code, you can use the coder.ExternalDependency class. To see which workflow is supported, see Import custom code.
Test your Simulink model and ensure it functions correctly.
If you have a Simulink Coder™ license, you can generate code for targets. To use the same source and header files for code generation, open Configuration Parameters, navigate to the Code Generation > Custom Code pane, and enable Use the same custom code settings as Simulation Target. You can also specify different source and header files.
To conditionalize your code to execute different commands for simulation and code generation, you can use the coder.target function.
coder.ceval in an Example MATLAB Function Block
This example shows how to call the simple C program doubleIt from a MATLAB Function block.
Create the source file doubleIt.c in your current working folder.
#include "doubleIt.h"

double doubleIt(double u)
{
    return(u*2.0);
}
Create the header file doubleIt.h in your current working folder.
#ifndef MYFN
#define MYFN
double doubleIt(double u);
#endif
Create a new Simulink model. Save it as myModel.
In the Library Browser, navigate to the Simulink > User-Defined Functions library, and add a MATLAB Function block to the model.
Double-click the block to open the MATLAB Function Block Editor. Enter code that calls the doubleIt program:
function y = callingDoubleIt(u)
y = 0.0;
y = coder.ceval('doubleIt',u);
Connect a Constant block that has a value of 3.5 to the input port of the MATLAB Function block.
Connect a Display block to the output port.
Open the Configuration Parameters window, and navigate to the Simulation Target pane.
In the Insert custom C code in generated section, select Header file and enter #include "doubleIt.h".
In the Additional build information section, select Source files, enter doubleIt.c, and click OK.
Run the simulation. The value 7 appears in the Display block.
When you call external C code by using MATLAB Function blocks or Stateflow®, you can control the type definitions for imported buses and enumerations in your model. Simulink can generate type definitions for you, or you can supply a header file containing the type definitions. You can control this behavior by toggling the Generate typedefs for imported bus and enumeration types parameter. To find this parameter, open the Configuration Parameters window, navigate to the Simulation Target pane, and expand the Advanced parameters section.
To configure Simulink to automatically generate type definitions, enable Generate typedefs for imported bus and enumeration types. To include a custom header file that defines the enumeration and bus types, clear Generate typedefs for imported bus and enumeration types and list the header file in the Header file text box.
See Also: coder.ceval | coder.target | coder.cinclude | coder.updateBuildInfo | coder.ExternalDependency | coder.BuildConfig | coder.ref | coder.rref | coder.wref
Boy, I sure would love it if someone could help me figure this out. My professor has created some code that's a loop for entering numbers. I need to add code to make the program figure averages and standard deviation. I've added the formulas as I believe they should be, and the program runs, but when I test the program with some small numbers I can do in my head, the average and deviation are wrong. Here's the code, stripped of pretty formatting....
Code:
// Program Abstract: the purpose of this program is to determine the average and standard deviation of input numbers
// Input Required: list of numbers
// Output Desired: program will output the list of numbers,
// average and standard deviation of each number
// =============================================================================
#include <conio.h>
#include <fstream.h>
#include <iomanip.h>
#include <iostream.h>
#include <math.h>
//==============================================================================
// variable declarations
// alphabetized variable dictionary
// =========================================================================
// d is data
// i is spacing
// m is average
// n is count
// reply stores user's answer to queries
// s is standard deviation
// x is sum
int main() {
double d, i, m, n, s, x;
double average, stddev, sum, sumsqr;
char reply;
ofstream fout;
fout.open ("prompt.txt");
// introduce the program to determine average and
// standard deviation of a list of numbers
//==============================================================================
cout << setw (61) << "Program for average and standard deviation";
// press any key to continue
cout << setw (52) << "Press RETURN to continue";
getchar();
// run program until user is done
// loop for data entry, calculates sum and counter, ends after last while
do
{
// setting beginning numbers to 0
n = 0;
x = 0;
sumsqr = 0;
// loop B check and correct data entry
do
{
clrscr ();
// prompt the user for the next data
cout << setw (61) << "Enter next data, enter -1 to end data stream";
cout << ": ";
cin >> d;
do
{
clrscr ();
// echo print data read
cout << setw (37) << "Is " << d << " correct?";
cout << endl << endl << endl;
// query the user for correctness of input data
cout << setw (55) << "Enter y for yes or n for no: ";
// input response to query
cin >> reply;
tolower(reply);
}
while(reply!='n' && reply!='y');
if(reply=='y') {
++n ;
x += d;
sumsqr += d * d;
// record verified data in output textfile
fout << d << endl;
}
}
while (d != -1);
// calculate average
if ( n > 0 )
m = x / n;
else
m = 0 ;
// press any key to continue
cout << setw (52) << "Press RETURN to continue";
getchar();
// calculate standard deviation
if ( n > 1 )
s = sqrt ((n * sumsqr - x * x) / (n * (n - 1))) ;
else
s = 0;
clrscr ();
cout << setw (50) << "The average is: " << m;
cout << setw (50) << "The standard deviation is: " << s;
// press any key to continue
cout << setw (52) << "Press RETURN to continue";
getchar();
do
{
clrscr ();
// query user for another input data
cout << setw (55) << "Is there another set of data?";
cout << setw (55) << "Enter y for yes or n for no: ";
cin >> reply;
tolower(reply);
}
while ((reply != 'n') && (reply != 'N') &&
(reply != 'y') && (reply != 'Y'));
}
while ((reply == 'y') || (reply == 'Y'));
fout.close ();
return 0;
} | https://cboard.cprogramming.com/cplusplus-programming/61855-what-did-i-do-wrong-printable-thread.html | CC-MAIN-2017-26 | refinedweb | 481 | 53.85 |
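For what it's worth, the formula itself looks right; the likely culprit is that the sentinel is recorded before it is tested, so the -1 ends up in n, x, and sumsqr and skews both statistics. A Python sketch of the intended computation, with the sentinel checked first:

```python
import math

def stats(values):
    # running totals, mirroring the C++ program's n, x, and sumsqr
    n = 0
    x = 0.0        # sum
    sumsqr = 0.0   # sum of squares
    for d in values:
        if d == -1:          # test the sentinel BEFORE recording it
            break
        n += 1
        x += d
        sumsqr += d * d
    mean = x / n if n > 0 else 0.0
    # sample standard deviation, same formula as the C++ code
    s = math.sqrt((n * sumsqr - x * x) / (n * (n - 1))) if n > 1 else 0.0
    return mean, s

print(stats([2, 4, 6, -1]))  # (4.0, 2.0)
```

In the C++ version, the fix would be to test d against -1 (and skip the recording) before incrementing n and the running sums.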
Fixed Point Math
A few years ago, I needed to use some data with decimal points on a DSP processor. The math library that came with the compiler was just too large and cumbersome. So, I wrote a fixed point library. This library is useful anywhere basic math needs to be done FAST. Microcontrollers don't use the floating point emulation, and PCs don't use the math coprocessor. With some tweaking, I have used it for basic animation and audio processing.
The fixed point library is very KISS. There is a class called "fixed" that essentially creates a new variable type similar to float or double.
#include <Fixed.h> #include <math.h> ... fixed a, b, c, d; float f; // Assign the vars a = 2.56890830294f; b = 10.374237497987f; // Multiply them c = a*b; f = a*b; // Display the results printf("%.6f * %.6f = %.6f = %.6f\n", (float)a,(float)b, (float)c, f);
// Assign the vars a = 2.56890830294f; b = 10.374237497987f; // compute them c = a.cos(); d = cosx(b); // Display the results printf(" cos(%.6f) = %.6f\n cosx(%.6f) = %.6f\n", (float)a, (float)c, (float)b, (float)d);
A lot of microcontrollers have limits to the integer sizes and other restrictions. There are a couple of #defines in the fixed.h header that account for some of these.
// Allow floating point input
#define FIXED_HAS_DOUBLE
// Allow longs
#define FIXED_HAS_LONG
FIXED_HAS_DOUBLE defines whether the processor supports the "double" size. Comment it out if a "double" == "float".
FIXED_HAS_LONG defines whether the processor differentiates between "long" and "int". Comment it out if a "long" == "int".
Internally, the class uses "long long". This is just used to ramp up precision. If the processor or compiler does not support "long long", it will just reduce precision.
Where you use this library class takes some thought. There is overhead in converting between "fixed" and "float", so converting to "fixed" for one operation is not recommended for speed. Also, the trigonometric functions are NOT optimized, so if you have a math coprocessor, it may be faster to use it for the trigonometric functions.
This library is for doing basic fixed point math using integers. It is also a great learning tool to learn how math libraries work.
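The core trick is easy to sketch. Here is a hypothetical Q16.16 layout in Python (16 integer bits, 16 fractional bits; the actual library may split the bits differently): values are scaled integers, and a multiply needs a wider intermediate followed by a shift back down:

```python
FRAC_BITS = 16
SCALE = 1 << FRAC_BITS          # 65536

def to_fixed(x):
    """float -> Q16.16 scaled integer"""
    return int(round(x * SCALE))

def to_float(f):
    """Q16.16 scaled integer -> float"""
    return f / SCALE

def fx_mul(a, b):
    """Multiply two Q16.16 numbers: the raw product carries
    32 fractional bits, so shift back down by FRAC_BITS."""
    return (a * b) >> FRAC_BITS

a = to_fixed(2.5)
b = to_fixed(4.0)
print(to_float(fx_mul(a, b)))   # 10.0
```

The need for that wider intermediate is exactly why the C++ class reaches for "long long" internally.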
Eclipse Community Forums - Weaving a new model from existing ones

Hi, I am struggling with this for quite a while (being totally new to EMF) and since I couldn't really find an answer (but always found very qualified responses in this forum) I'm giving it a shot here. I am trying to weave two ecore models: the BPMN20 model and the R2ML model. As the latter provides only a somewhat unsatisfactory XSD for importing, I instead used the r2ml.ecore from a project which does something similar to what I want to do. The aim is to be able to create a model in the generated editor which incorporates both models plus the few new elements from a model I am weaving in between, and finally to apply some OCL constraints for checking model consistency. Shouldn't be too hard, right?

Well, I fetched the files from the repository (which are working flawlessly), merged both the R2ML model and the other adaptions (that form my own model) into two subpackages of the BPMN model, pulled the necessary strings to attach them to each other (i.e. references, inheritance, etc.), and tried to create the genmodel. After playing around with the different namespaces (which are hopefully correct by now; at least the generated code no longer throws errors) I could finally generate the edit and editor code and try to run the latter. Alas, when trying to create a new example file with the recently generated editor I find two problems:

1) The models are not woven; I can separately choose to create a BPMN, R2ML, or adaptions model, which is not intended, as I want to model only a BPMN basis model with some new elements.

2) I get a "FeatureNotFoundException" saying the feature "definitions" (the basic element of the BPMN model) was not found, which is not really helpful for locating the problem.

I'd appreciate any clues to help me with this matter as I am not sure what or where to change to get this thing running.

Best regards,
Steffen (2010-12-08)
Is there a way to convert a hue value (or a number in some range) to RGB?
Right click on the bar above the color, select use expression, then use rgb(r, g, b) with r, g, and b being values from 0-255.
I know I can do that, what I need is to find out the rgb from a hue.
Oh, sorry, misunderstood. Not sure how to do that.
Those are two different demands.
1) Hue
Hue will never stand alone; it is part of models that represent RGB in another geometry. You will find 3 models: HSV, HSL, and HSI (Hue Saturation [Value, Lightness, Intensity]). In case you need to convert from one of the first two models, Python is the easiest way:
import colorsys
my_rgb = colorsys.hsv_to_rgb(0.25, 0.5, 0.4)
r = my_rgb[0] * 255
g = my_rgb[1] * 255
b = my_rgb[2] * 255
A more useful implementation
1. Put the following script as a subevent of 'Start of layout':
[code:3prbcqlu]import colorsys
class FromHSV(object):
def __init__(self):
self.__h = 0
self.__s = 0
self.__v = 0
def set_hue(self, hue):
self.__h = hue
def set_saturation(self, saturation):
self.__s = saturation
def set_value(self, value):
self.__v = value
def get_red(self):
return colorsys.hsv_to_rgb(self.__h, self.__s, self.__v)[0] * 255
def get_green(self):
return colorsys.hsv_to_rgb(self.__h, self.__s, self.__v)[1] * 255
def get_blue(self):
return colorsys.hsv_to_rgb(self.__h, self.__s, self.__v)[2] * 255
Convert = FromHSV()
Then, as soon as you have the HSV values, use any of these to fill the class:
[code:3prbcqlu]Convert.set_hue(yourhuehere)
Convert.set_saturation(yoursathere)
Convert.set_value(yourvalhere)
And finally, whereever you need the rgb, use the functions get_red, get_green or get_blue. In the rgb expression it would be:
[code:3prbcqlu]RGB(Python("Convert.get_red()"), Python("Convert.get_green()"), Python("Convert.get_blue()"))[/code:3prbcqlu]
2) From any range of numbers
I'm not sure what you mean. Basically, red, green and blue are expressed as values in the range [0, 255]. If you have values in another range, just normalize them and then map them to [0, 255]. For example, v in range [0, 1000] would become v / 1000 * 255, v in range [3, 20] would become (v - 3) / 17 * 255, etc.
Thanks, I'll look at that.
But just to clarify, the Sat & Val are always 100. I just need to retrieve the RGB with the HUE being the only changing variable.
Is there a simpler way? (preferably without using Python?)
I couldn't think of anything easier than calling a function "hsv_to_rgb"
If you don't want to use Python, you can always do your own conversion events. Converting between color models involves some math, here is one example code (or adapt the one from the shader):
If anyone is interested I figured it out:
RGB(Max(510*Abs(Cos([ANGLE]))-255, 0), Max(510*Abs(Cos([ANGLE]+120))-255, 0), Max(510*Abs(Cos([ANGLE]+240))-255, 0))
Thanks for the help.
tulamide your python class works great when setting the values if i set them directly like -
Convert.set_hue(0.25)
Convert.set_saturation(0.5)
Convert.set_value(0.4)
however when i do this to have EditBox input -
Convert.set_hue(EditBox.Value)
Convert.set_saturation(EditBox2.Value)
Convert.set_value(EditBox3.Value)
I am getting errors, so i am guessing this might be the wrong way to get EditBox values. Is there a way to make the EditBox inputs work?
The edit box only knows text. You access its content with 'Get text' (Editbox.Text). But Python does not autoconvert the text to a number. So use the Python-built-in function float(), e.g.:
Convert.set_hue(float(EditBox.Text))
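For reference, when saturation and value are pinned at 100% the conversion this thread is after can be written directly with the standard colorsys module (this is plain Python, not a Construct expression):

```python
import colorsys

def hue_to_rgb(hue_degrees):
    """Full-saturation, full-value hue -> (r, g, b) in 0-255."""
    r, g, b = colorsys.hsv_to_rgb((hue_degrees % 360) / 360.0, 1.0, 1.0)
    return round(r * 255), round(g * 255), round(b * 255)

print(hue_to_rgb(0))    # (255, 0, 0)
print(hue_to_rgb(120))  # (0, 255, 0)
print(hue_to_rgb(240))  # (0, 0, 255)
```

colorsys expects hue in the range [0, 1], which is why the degrees are divided by 360 first.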
XNA Game Studio
The problem shows up when you busy-wait for the asynchronous call to complete, like this:
IAsyncResult result = Guide.BeginShowStorageDeviceSelector(...);
while(result.IsCompleted == false)
{
}
StorageDevice device = Guide.EndShowStorageDeviceSelector(result);
Instead, pass a callback and let the game loop keep running:

public class MyGameState
{
    public StorageDevice Device;
}

public class MyGame : Game
{
    protected override void LoadContent()
    {
        // Some object you want to have passed into the callback
        MyGameState myGameState = new MyGameState();
        Guide.BeginShowStorageDeviceSelector(StorageCompletedCallback, myGameState);
    }

    void StorageCompletedCallback(IAsyncResult result)
    {
        // Retrieve the state you passed in
        MyGameState myGameState = (MyGameState)result.AsyncState;

        // Complete the call and retrieve the selected StorageDevice
        myGameState.Device = Guide.EndShowStorageDeviceSelector(result);
    }
}
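To see why the busy-wait version hangs, it helps to model the dispatch loop. The sketch below is a toy with made-up names, not the XNA API: completed operations are only delivered when the game loop pumps, so a thread that spins instead of pumping never sees its callback run.

```python
# Toy model: "Begin" queues work; callbacks are dispatched only when the
# game loop pumps (think GamerServicesComponent.Update). Hypothetical names.
pending = []

def begin_show_selector(callback, state):
    # the "Begin" call just queues the operation
    pending.append((callback, state))

def update():
    # one tick of the game loop: dispatch any completed operations
    while pending:
        callback, state = pending.pop(0)
        callback(state)

result = {}
begin_show_selector(lambda state: state.update(device="device0"), result)
print(result)   # {} -- nothing happens until the loop pumps
update()
print(result)   # {'device': 'device0'}
```

A while loop that polls result between the two prints, without ever calling update(), would spin forever, which is exactly the deadlock described above.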
Hopefully if you’ve run into this problem with your games, you now have a better understanding of what is occurring, and how to fix it.
As detailed here, we've released an update to XNA Game Studio Express today. You can get an idea of what's in it by reading this announcement post. The primary goal of this release was to get official Vista support into the product. We also took the opportunity to fix a bunch of bugs and add a few new features. Many of the bugs we've fixed and features we added came directly from you, so I hope you enjoy this release and thanks!
Now on to the next release!
Ohio State 24 Texas 7
I put my video for the Game Component demo up on the XNA Team Blog and forgot to mention it here. Oops! :) You can send us feedback through the Connect Web site. After you sign in with your Windows Live ID, click on Feedback. From there, you can choose to file a bug or offer a suggestion for XNA Game Studio Express and/or the XNA Framework.
I'm working on the recording of the component demos and that tutorial. More later!
I can't wait to see way better components over the coming weeks from the community! As mentioned earlier, I'll also post a tutorial walking you through creating a component that displays the frame rate for your game. Just drop it onto your game and you're done! Go start making games!
Last Friday I posted on our team blog details about the XNA Framework. Once we release the beta I'll post some tutorials and samples on my blog, including some GameComponent samples. Stay tuned!! | http://blogs.msdn.com/mitchw/ | crawl-002 | refinedweb | 359 | 55.95 |
Nov
8
Longest Words Followup – Java -v- Perl
Filed Under 42 (Life the Universe & Everything), Computers & Tech on November 8, 2010 at 6:40 pm.
So, this is the resulting code:
[java]
import java.util.Vector;
import java.util.Enumeration;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;

public class dict{
	public static void main(String args[]){
		//declare the needed variables
		String longestLeft="", longestRight="";
		int minLength = 10;
		Vector<String> longLeftWords = new Vector<String>();
		Vector<String> longRightWords = new Vector<String>();
		try{
			//open the dictionary file
			File file = new File(args[0]);
			BufferedReader reader = null;
			reader = new BufferedReader(new FileReader(file));

			// loop through the file
			String line;
			while((line = reader.readLine()) != null){
				// remove the trailing new line character from the string
				line = line.replaceAll("\n|\r", "");

				// check for characters on the right, if not, then it's an all-left word
				if(!line.toLowerCase().matches(".*[yuiophjklnm].*")){
					if(line.length() >= minLength){
						longLeftWords.add(line);
					}
					if(line.length() > longestLeft.length()){
						longestLeft = line;
					}
				}

				//vice-versa
				if(!line.toLowerCase().matches(".*[qwertasdfgzxcvb].*")){
					if(line.length() >= minLength){
						longRightWords.add(line);
					}
					if(line.length() > longestRight.length()){
						longestRight = line;
					}
				}
			}

			// close the dictionary file
			reader.close();
		}catch(Exception e){
			System.out.println("\n\nERROR - Failed to read the dictionary file '" + args[0] + "'\n");
			e.printStackTrace();
			System.exit(1);
		}

		// print the results
		System.out.println("\nLong words (at least " + minLength + " letters) with the left-side of the KB only:");
		Enumeration words = longLeftWords.elements();
		while(words.hasMoreElements()){
			System.out.println("\t" + (String)words.nextElement());
		}
		System.out.println("\t\t(total: " + longLeftWords.size() + ")");
		System.out.println("\nLong words (at least " + minLength + " letters) with the right-side of the KB only:");
		words = longRightWords.elements();
		while(words.hasMoreElements()){
			System.out.println("\t" + (String)words.nextElement());
		}
		System.out.println("\t\t(total: " + longRightWords.size() + ")");
		System.out.println("\nLongest left-only word: " + longestLeft + " (" + longestLeft.length() + " letters)");
		System.out.println("\nLongest right-only word: " + longestRight + " (" + longestRight.length() + " letters)\n");
	}
}
[/java]
The obvious thing is that it’s longer than yesterday’s final deluxe Perl solution, about twice as long in fact. The code is also much wordier, with the lines being longer than in the Perl version. There’s also a heck of a lot of ‘fluff’ in Java. Reading the input file literally takes two characters in Perl (<>), while in Java it takes about six lines when you include the mandatory exception handling. Getting a variable-length array is also far more cumbersome: using java.util.Vector helps a lot, but it means you have to use java.util.Enumeration to iterate through your vector for printing instead of a simple foreach loop like in Perl. Finally, notice how much clunkier the regular expressions are! Java has nothing as trivial as Perl’s m operator.
OK, so the code is longer, more fluffy, and harder to read and write, but how does it run? The simple answer, slower! About three times slower in fact:
[code]
bartmbp:Temp bart$ time ./dict.pl /usr/share/dict/words >>/dev/null
real 0m0.761s
user 0m0.275s
sys 0m0.010s
bartmbp:Temp bart$ time java dict /usr/share/dict/words >>/dev/null
real 0m2.391s
user 0m2.230s
sys 0m0.121s
bartmbp:Temp bart$
[/code]
Given that Perl is a scripting language and Java is at least partially compiled, you’d expect Java to have the edge. But, when it comes to pattern matching, Perl is in its element, while Java is really rather lost. I think it’s Java’s poor RE engine that’s making the difference here.
So, there you have it, Perl really is quicker and simpler for messing with text. Who knew 😉 | https://www.bartbusschots.ie/s/2010/11/08/longest-words-followup-java-v-perl/ | CC-MAIN-2020-40 | refinedweb | 586 | 52.05 |
Groovy in One Day
Groovy in One Day
Join the DZone community and get the full member experience.Join For Free C:\Program Files\Groovy\Groovy-1.6.3 but as it seems to have problems with spaces in directories, I'll install it somewhere else (and without any version number in the directory name).
The installer is quite smart as it sets the environment variable for you and even register the .groovy and .gy with the groovy executable.
Groovy is released under the Apache 2.0 license.You also have a NetBeans plug-in. In Tools -> Plug-ins, choose Groovy and Grails, and restart the IDE.You can also run Groovy console with Web Start in the sandbox.
The language
The user guide is available at.
The keywords:
- Same as Java
- def, it, is, as
- for (in)
Some explanations:
- Methods and classes are by default public
- Inner classes are not supported
- return and ; are optional
The full list of reserved keywords is available here.
Operators:
Use .intdiv() to divide integers.
def displayName = user.name ?: "Anonymous" // Anonymous when user.name is null
parent?.child to avoid if (parent != null) ...
The main classes
- Standard Java classes
- GString
- GroovyServlet,SimpleTemplateEngine,GPath
- AntBuilder,SwingBuilder,MarkupBuilder
Helloworld
Writing the app
In NetBeans File -> New Project -> Samples -> Groovy -> Groovy-Java Demo.
The demo can already be started.
Delete the Java file and rename the other file as Helloworld.groovy and write
package demo
import groovy.swing.SwingBuilder
import javax.swing.JFrame
def swing = new SwingBuilder()
def frame = swing.frame(title:'Helloworld', size:[250,80]) {
def hello = label(text:"Helloworld!")
hello.font = hello.font.deriveFont(16.0f)
}
frame.show()
Documentation
Javadoc: As for JavaFX, the documentation and the examples seem to skip this step.
In Netbeans you can right-click on the project and choose Generate Javadoc, this will leave an error No source files and no packages have been specified.
Distribution
In Groovy\embeddable you have a groovy-all-1.6.3.jar which contains the classes needed to run you Groovy application if you only use classes from JavaSE and Groovy.
The build.xml includes a jar target, execute it will put the application files in the dist directory. You may want to edit nbproject/project.properties with dist.jar=${dist.dir}/helloworld.jar and remove Swing layout from the libraries (as not use for helloworld).
You may also want to replace dist\lib\groovy-all-1.5.5.jar with 1.6.3 as NetBeans plug-in comes with 1.5.5.
Concepts
Closure
Closure allows to consider a function as a variable type (a bit like java.lang.reflect.Method).
def uppercaseClosure = { it.toUpperCase() }
def list = 'a'..'g'
def uppercaseList = []
list.collect( uppercaseList, uppercaseClosure )
def loginSA = database.login("sa", "")
loginSA()
def printSum = { a, b -> print a + b }
def printPlus1 = printSum.curry(1)
printPlus1(7) // prints 8
Builders in Groovy are based on closures.
Builders
Builders allows to create structure object using declaration instead of calling methods. The syntax is similar to JSON.
Example of builders in Groovy are SwingBuilder (used for helloworld), MarkupBuilder (for XML), ObjectGraphBuilder (for POJO), AntBuilder/Gant, GraphicsBuilder, HTTPBuilder.
You can of course create your own builder.
GString
def name = "James" // normal String
def text = """\
hello there ${name}
how are you today?
""" // GString because it uses ${} and multiline with the """
Templates
Templates allows to insert text and function calls in a text.
def joe = [name:"John Doe"]
def engine = new SimpleTemplateEngine()
template = engine.createTemplate(text).make(joe)
assert template.toString() == "\nhello John Doe\nhow are you today?\n"
Use <% code %> to execute code/functions in the template.
Regular expressions
You can use java.util.regex.Matcher and java.util.regex.Pattern classes as in Java.
def pattern = ~/foo/ // Same as new Pattern("foo")
def matcher = "cheesecheese" =~ /cheese/ // Same as new Pattern("cheese").matcher("cheesecheese")
def matches = "cheesecheese" ==~ /cheese/ // Same as new Pattern("cheese").matcher("cheesecheese").matches();
The occurence of the matcher can be accessed as are collections. e.g.
matcher[1]
for the second match or
matcher[0, 1..2]
for a collection of the first 3 matches.
Collections
def list1 = [1, 2, 3, 4]
def list2 = 5..10
println("second element: ${list2[1]}")
def list3 = list2.findAll{ i -> i >= 7 } // Using closure to create a subset of list2
def list4 = list2[2..5] // Getting the same subset, 2 and 5 are indexes not values
def map1 = [name:"Gromit", likes:"cheese", id:1234]
def map2 = [(map1):"mouse"]
def list5 = list2*.multiply(2) // list5 contains list2 items * 2
Lists can also be defined in for and switch statements: for (i in 1..10) or case 1,2,6..10:
<< seems to be used to add elements but I couldn't find it in the documentation.
Classes and functions
You don't have that much documentation on how to do it.
package mypackage
import java.io.File
import groovy.swing.SwingBuilder
/**
* My class.
* @author Me
*/
class MyClass {
// class variable
def myVar = ""
/**
* My method
* @param text
* Some text.
*/
String addText(String text) {
myVar += text
}
void main(String[] args) {
print addText("hello")
}
}
Grails
I cannot talk about Groovy without mentioning Grails.Grails is a Server - Database framework inspired by Ruby on Rails and based on Spring + Hibernate. Grails heavily uses Groovy in order to minimize the code to write to create a server - database application.It uses GROM (Grails Object Relational Mapping) which is based on Hibernate.
The domain classes are simple POJOs containing the objects to manage/store/show. e.g. User, Book, Car, ... Contraints can be defined in the constraints closure. e.g.def constraints = { firstName(blank:false) }
For relationship use static hasMany = [books: Book], static mappedBy, static belongsTo.
The controllers are the action classes, the classes methods will be called when a form is submitted. e.g. class UserController { def doLogin = { ... } }
In configuration you have BootStrap.groovy to manage the application life cycle and DataSource.groovy to specify the location of the database.
The i18n directory contains the error messages.
The view (HTML pages) uses per default GSP (Groovy Server Pages).
To release the project, right click on the project and select Create War File.
Grails release includes a Petclinic demo.
Getting started articles here and here
There is a video demo on netbeans.tv
The reference documentation is available at.
Other
There are no examples with the release (except for ActiveX with Scriptom), examples are online
Code completion in NetBeans was weak.
Groovy supports annotations.
Groovy has bindings with groovy.beans.Bindable.
@Bindable String prop
textField(text:bind(source:this, sourceProperty:'prop'))
Grape is a system to include libraries in a repository. Grape will download the dependency Jar files if needed when starting your application. For Swing development, you have doOutside { } to execute code outside the EDT and inside it you can have edt { } when a part of the code needs to be on the EDT. Griffon is a groovy desktop application framework.
Integration with Java
From Groovy to Java, just use the Java class as you would do in Java. Note that 10.0 is a BigDecimal, 10.0f is a java.lang.Float and 10.0d is a java.lang.Double.
From Java to Groovy, use the javax.script.* classes.
Conclusion
Groovy introduces new concepts making code smaller to write. You need to make sure that the code compactness will not come to the cost of code readability. Buying a book could be useful for more examples and more documentation. Groovy developers seem to agree that the best IDE for Groovy/Grails is IntelliJ IDEA.
Pro's
- Builders
- Useful for client and server side applications
- Collection
- Feature rich
- Windows installer
Con's
- More complicated to learn than Java
- Examples are more code snippets than small applications. (More examples at sf.net)
- Version numbers everywhere
From
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/groovy-one-day | CC-MAIN-2020-40 | refinedweb | 1,308 | 61.02 |
SparkleXRM 7.3.0
An open-source library for building Dynamics CRM XRM solutions using Script#, jQuery & Knockoutjs.
Build client slide HTML webresources with all the productivity of c#. Migrate from Silverlight webresources using the MVVM data binding. Share code between the server and client.
There is a newer prerelease version of this package available.
See the version list below for details.
See the version list below for details.
Install-Package SparkleXRM -Version 7.3.0
dotnet add package SparkleXRM --version 7.3.0
<PackageReference Include="SparkleXRM" Version="7.3.0" />
For projects that support PackageReference, copy this XML node into the project file to reference the package.
paket add SparkleXRM --version 7.3.0
The NuGet Team does not provide support for this client. Please contact its maintainers for support.
Release Notes
This release renames the root namespace from Xrm to SparkleXrm to avoid a conflict with the Xrm.Sdk namespace in the core platform.
You will need to re-compile your projects against the Script Assemblies - no changes to code should be required unless you have created scripts that are built without using Script#
Dependencies
.NETFramework 4.0
- ScriptSharp (>= 0.7.5.1)
- ScriptSharp.Lib.HTML (>= 0.7.5)
- ScriptSharp.Lib.jQuery (>= 0.7.5)
- ScriptSharp.Lib.jQuery.UI (>= 0.7.5)
- ScriptSharp.Lib.Knockout (>= 0.7.5)
GitHub Usage
Showing the top 1 GitHub repositories that depend on SparkleXRM: | https://www.nuget.org/packages/SparkleXRM | CC-MAIN-2019-51 | refinedweb | 232 | 69.48 |
September Reports (see ReportingSchedule)
ActiveMQ is ready and would prefer to become a TLP. Once the 4.0.2 release is completed expect more serious discussions regarding graduation to pop up on the incubator mailing lists.
ADF Faces
The ADF Faces / Trinidad project solved lot's of todos. We repackaged the software to get rid of adf inside the namespace. We also renamed some of the JSF components. We managed to get a website and deployed it the the incubator site. We created a first RC of our maven2 plugins, which is currently under review phase by some Incubator PMC members. The size of committers is grwoing. Added two new committers to the project during the last three month. Users (or developers) action is much beyond from just sending questions. Jira is a important fact of this community, where users apply patches to. The community is still growing. In August we had 438 sent to the developers list. In July it have been 266.
Cayenne
Finished tasks:
- Finished switch to the ASF infrastructure.
- Finished relicensing files.
- Received CLAs from all contributors but Gary Jarrel
- Released Cayenne 1.2 externally to Apache
- Voted for a new PPMC member (member's acceptance is still pending)
- Mentored 3 students as a part of Summer of Code
- Switched the code to Maven
- Had discussions with Geronimo project on JPA integration.
Scheduled:
- Will rewrite those few pieces by Gary Jarrel to finish out IP issues, then
- Will release Apache Podling Cayenne 2.0 in a week or two..
In addition to coding, there has been some effort to get a website up and going, but we're currently debating the best tools for the job. Also, we've also discussed whether or not we want to change the name, but their has been no clear consensus.
log4net
Work continues on the next point release containing minor bug fixes. We are evaluating several enhancements that take advantage of the new features in .NET 2.0. This raises backwards compatibility questions.
We have an open ended discussion on the use and storage of strong name assembly signing keys. This may need further discussion at wider level, and probably requires some sort of consensus amongst all the .NET projects.
log4php
No important news to report. Mailing lists still maintain low activity.
The OFBiz community has now completed all the tasks required by the incubation process. Since the last board report, the following items has been completed:
- source code moved to the Apache Incubator SVN server (since 2006-07-01)
- web site cleaned up and migrated to the Apache Incubator server (since 2006-07-02)
- for details
mod_ftp
Work is progressing on updating the build system so that it is possible to have it folded right into the 2.2.x or trunk version of httpd. The httpd PMC is being contacted to determine where the module should go, since there is currently discussion and debate on the concept of module "sub-projects" with httpd.
We have an offer from Noirin Plunkett to convert the current docs to the current httpd format. This is also something that should be done before graduation.
In hindsite, the mod_ftp podling should have been in place just to do the IP vetting, in which case it would have graduated long ago (this is how it would have been done if submitted today). Lesson learned.
OpenJPA
The code arrived this quarter, and that helped the community as there's now something to discuss.
The initial code drop emerged from BEA and is now actively being worked on. There have been community discussions on new features, documentation, and release numbering. The community decided to use cwiki and adding documentation for the project.
The community added.
OpenEJB
We've just heard from Matt who's a release manager for Apache Geronimo 1.1.1 that the last reason to keep OpenEJB at Codehaus had been cleared and we're ready to move JIRA and repos to ASF.
Three new people have showed their interest in the code and started to contribute - Mohammed Nour, Rick McGuire and Jay D. McHugh.
SVN as successfully been moved from Codehaus to Apache.
Jira migration is being coordinated. This will be a migration to a new instance in ASF similar to Cayenne. However, we'd really like to run in the main instance. To facilitate this, work on a Jira migration tool is also underway. We hope that this will be useful to other projects migrating or whom have migrated. | https://wiki.apache.org/incubator/September2006?highlight=ServiceMix | CC-MAIN-2019-04 | refinedweb | 749 | 65.52 |
pwrite, write - write on a file
#include <unistd]. write() function is unspecified.
[XSR]().
[XSI]() [XSI] and pwrite() shall return the number of bytes actually written to the file associated with fildes. This number shall never be greater than nbyte. Otherwise, -1 shall be returned and errno set to indicate the error. marked O_NONBLOCK, and write would block.
- [ECONNRESET]
- A write was attempted on a socket that is not connected.
- [EPIPE]
- A write was attempted on a socket that is shut down for writing, or is no longer connected. In the a STREAMS file may fail if an error message has been received at the STREAM head. In this case, errno is set to the value included in the error message.
The write().
[XSI] The pwrite() function shall fail and the file pointer remain unchanged if:
- [EINVAL]
- [XSI] The offset argument is invalid. The value is negative.
- [ESPIPE]
- [XSI] fildes is associated with a pipe or FIFO. IEEE Std 1003.1-2001.
- Deferred:
- ret=-1, errno=[EAGAIN]>
First released in Issue 1. Derived from Issue 1 of the SVID.
The DESCRIPTION is updated for alignment with the POSIX Realtime Extension and the POSIX Threads Extension.
Large File Summit extensions are added.
The pwrite() function is added.
The DESCRIPTION states that the write() function does not block the thread. Previously this said "process" rather than "thread".
The DESCRIPTION and ERRORS sections are updated so that references to STREAMS are marked as part of the XSI STREAMS Option Group. number of bytes written, or whether it returned -1 with errno set to [EINTR]. This is a FIPS requirement.
-
The following changes are made to support large files:
-
For regular files, no data transfer occurs past the offset maximum established in the open file description associated with the fildes.
-
A second [EFBIG] error condition is added.
- sockets: [EAGAIN], [EWOULDBLOCK], [ECONNRESET], [ENOTCONN], and [EPIPE].
The [EIO] error is made optional.
The [ENOBUFS] error is added for sockets.
The following error conditions are added for operations on generated to the calling process" to "a SIGPIPE signal shall also be sent to the thread".
IEEE Std 1003.1-2001/Cor 2-2004, item XSH/TC2/D6/147 is applied, making a correction to the RATIONALE. | https://pubs.opengroup.org/onlinepubs/009695399/functions/write.html | CC-MAIN-2022-21 | refinedweb | 368 | 68.57 |
gus_client 0.5.20
Connect to GUS
Easy client for connecting to GUS our internal Bug Tracking System. If you don’t work for salesforce.com, this package won’t be very useful to you.
The client acts as a wrapper for the simple_salesforce package which uses the Salesforce REST API.
Additionally, the client will persist your username and security token to make logins a bit easier. Most users only know their password.
Ideally you would extend the gus.Gus.Client and add methods to do what you need using simple_salesforce format:
from gus.Gus import Client
gus = Client();
…will attempt to log into Gus and prompt you for a username, password and security_token. If successful, it will persist your username and security token along with the current session token in ~/.r6_local_data and use it next time. If your session token expires, it will prompt you to log in again.
There are a number of client modules that can be used:
BacklogClient - Lists and modifies work items in gus DependencyClient - Lists and compiles team, release and work dependency trees ThemeClient - Lists themes ScrumTeamClient - Information on scrum teams
An example of how to use this client
from gus.BacklogClient import BacklogClient gus = BacklogClient() buildid = gus.find_build_id(‘MC_185’) work = gus.find_work(‘W-1749572’) gus.mark_work_fixed(work[‘Id’], buildid)
- Downloads (All Versions):
- 23 downloads in the last day
- 460 downloads in the last week
- 1950 downloads in the last month
- Author: Shawn Crosby
- License: Keep it real
- Package Index Owner: shawncrosbys
- DOAP record: gus_client-0.5.20.xml | https://pypi.python.org/pypi/gus_client | CC-MAIN-2015-40 | refinedweb | 255 | 56.66 |
A new feature of .NET 2.0 and Visual Studio 2005 is the ability to save a user's application settings in a user.config file that is saved in the user's desktop profile.
Until .NET 2.0, it was difficult to save a user's settings for an application. User settings had to be saved in the registry, in an .ini file, or in a custom text file. Now, it is possible to save settings for a user's application that will even move with them in a roaming desktop profile.
To view the non-roaming settings, open the user.config file located at %USERPROFILE%\Local Settings\Application Data\<Company Name>\<appdomainname>_<eid>_<hash>\<verison>\user.config.
To view the roaming user settings, open the user.config file located at %USERPROFILE%\Application Data\<Company Name>\<appdomainname>_<eid>_<hash>\<verison>\user.config.
To add application or user settings to a project, right-click on the project name in the Solution Explorer window, and click Properties. Then, click on Settings in the left tab list.
When a setting is added to the Visual Studio designer, a public property is created in the
My.Settings namespace. Depending on the scope of the setting, the property will be
ReadOnly or writable. This allows you to programmatically change the user setting values and save them with the
My.Settings.Save() method.
A second way to save settings is to enable the 'Save My.Settings on Shutdown' setting in the application. To do this, right-click on the project name in the Solution Explorer window, and click Properties. Then, click on Application in the left tab list.
To restore the last saved dimensions of a form, we set the size and location of the form from the user settings in the form's
Load event.
Private Sub Form1_Load _ (ByVal sender As Object, ByVal e As System.EventArgs) _ Handles Me.Load 'Set textboxes and form name from application and user settings 'Notice how the application setting property is ReadOnly Me.Text = My.Settings.MainFormText 'Notice how the user settings are writable Me.Size = My.Settings.MainFormSize Me.Location = My.Settings.MainFormLocation Me.txtFormText.Text = My.Settings.MainFormText 'Show the form now Me.Visible = True End Sub
In the form's
FormClosing event, we save the form's size and location values to the current values.
Private Sub Form1_FormClosing _ (ByVal sender As Object, _ ByVal e As System.Windows.Forms.FormClosingEventArgs) _ Handles Me.FormClosing Try 'Set my user settings MainFormSize to the 'current(Form) 's size My.Settings.MainFormSize = Me.Size 'Set my user setting MainFormLocation to 'the current form's location My.Settings.MainFormLocation = Me.Location 'Save the user settings so next time the 'window will be the same size and location My.Settings.Save() MsgBox("Your settings were saved successfully.", _ MsgBoxStyle.OkOnly, "Save...") Catch ex As Exception MsgBox("There was a problem saving your settings.", _ MsgBoxStyle.Critical, "Save Error...") End Try End Sub
You can use application and user settings for many things in .NET 2.0. Just remember that if you want to change the settings then the scope has to be set as User.
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/vb/appsettings2005.aspx | crawl-002 | refinedweb | 539 | 50.63 |
A reader sent me a link to a list of points that make Gmail really great. I'm not sure whether or not these points are enough to convince me that Gmail is fantastic, but I admit that it does do some things well (Hotmail does a few things well, also).
One of the things that I found interesting from supporting points is that Gmail allows you to find out who sold your email address to spammers.
Here is how to use it:
This assumes, of course, that you're going to enter in the website where you sign up to receive stuff or get access to something else. If you do that then I think that you're going to have a pretty good clue who is farming out your email address to spammers. It lets you track down those guys who say they'll protect your privacy but really don't.
The one drawback to this is that some web pages may get wise to this trick and start cleaning up email addresses by removing everything after and including the + sign up to the @ symbol. Until they do, I think that this trick has some merit.
This is a known hack for those users who use procmail for message filtering. Unfortunately, there is nothing to stop the address sellers from parsing the LHS of the email address and discard everything from the plus sign up to the at sign.
You are aware that this has been available in Sendmail and Postfix (and most likely other MTAs) since many, many years?
yeah, this is an old UNIX trick. works great ;)
Old school trick that the spammers are well aware of I'm afraid.
Postfix has had VERP support since version 1.1 and the recipiennt_delimiter configuration even longer.
See for more info.
I actually use a variant of this on my home mail server that fixes the problem of spammer's knowing this trick: Don't make the LHS of the + version of your email the same as your real email address.
So my real email address is mdouglass@...
When I give out my email address, I give out md+website@...
If they drop the +website and just send to md@..., the email is thrown away as obvious spam.
Yes, it's more polluting of the email namespace, but there's no way for the spammer to get back to my real address and I can still track down who sends me my spam (which is an interesting list, btw).
I'll also note that this occassionally gets funny reactions when you have to speak to real people at a company. Someone at vonage gave me a month free because they thought it was so cool I loved their service enough to have it in my email address. I tried to explain, but she just didn't understand.
I find services like sneakemail.com to be preferable. I can generate as many unique addresses as I want and label each with the web site name I will use it at, I can track how many emails each sneakemail address is receiving (and who they were from), and I can simply delete any sneakemail address that is being spammed.
FYI (as a developer of FastMail) I'd like to point out we support this with a few extra tricks as well.
1. You get an entire sub-domain. So if your account is joe@fastmail.fm, you can sent email to anything@joe.fastmail.fm and it'll get to your account. This is more supported in webforms than +'s as well. Internally anything@joe.fastmail.fm is transformed to joe+anything@fastmail.fm
2. If you have a folder called "anything", then sending to joe+anything@fastmail.fm or anything@joe.fastmail.fm will automatically go to that folder. Additionally, it'll "fuzzy match" folder names, with case-insensitive matching, and with _, - and space all being equal. Also . will act as a folder separator. So if you send to mailing-lists.listname@joe.fastmail.fm and have a folder called "Mailing Lists/ListName", it'll automatically be put in that folder | http://blogs.msdn.com/b/tzink/archive/2008/05/22/gmail-has-an-interesting-idea-to-thwart-spammers.aspx?Redirected=true | CC-MAIN-2014-23 | refinedweb | 693 | 72.66 |
MobX Little Router
A view-agnostic router that uses MobX to manage its internal state. Built to handle the complex requirements of modern-day, universal web applications.
Implementation of an universal router with MobX as the state management solution.
Why?
Our development team use MobX as the state management library for most of our applications.
Routing has become increasingly complex in recent times, and we believe that routing libraries should be updated to reflect this reality. We've tried several existing routers in the React + MobX ecosystem, but none has met our requirements or functioned exactly the way we want it to. And so we built our own router.
Here are what you get from mobx-little-router out of the box.
- Static type support for Flow.
- State management and change detection that lives completely within MobX. This means you have a single source of truth and a single place to change all data in your application.
- Support for dynamically loaded routes on both client and server environments. This is key for building modern-day progressive web apps.
- Middleware layer that provides extensibility to the router.
- Server-side rendering support (SSR) and integration with express server.
- View-agnostic routing capabilities. This means adapters other than React can be created by hooking into the router state.
Quick start
If you are using React, then you'll need to install three modules.
npm i --save [email protected] mobx-little-router mobx-little-router-react # Or with yarn yarn add [email protected] yarn add mobx-little-router yarn add mobx-little-router-react
Note:
history is a third-party peer dependency of the Router. It abstracts away history management
between different JavaScript environments. Learn more here.
Then you can create a Hello World app as follows.
import React from 'react' import ReactDOM from 'react-dom' import { createBrowserHistory } from 'history' import { install, Outlet, RouterProvider } from 'mobx-little-router-react' const Home = () => <h1>Hello, World!</h1> const router = install({ history: createBrowserHistory(), routes: [ { path: '', component: Home } ] }) router.start(() => { // The <Outlet/> element outputs the matched route. ReactDOM.render( <RouterProvider router={router}> <Outlet /> </RouterProvider>, document.getElementById('root') ) })
For a more comprehensive React example, you can explore the client
and server examples.
UMD build
You can play around with the UMD version of the router by including three scripts:
e.g.
HTML:
<!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width"> <title>JS Bin</title> <script src="[email protected]/umd/history.js"></script> <script src="[email protected]/lib/mobx.umd.js"></script> <script src="[email protected]/umd/mobx-little-router.js"></script> </head> <body> </body> </html>
JS:
let h = History.createMemoryHistory({ initialEntries: ['/a'] }) let router = mobxLittleRouter.install({ history: h, routes: [ { path: 'a' }, { path: 'b' }, { path: 'c' } ] }) mobx.autorun(() => { console.log(`pathname is ${router.location.pathname}`) }) router.start(() => router.push('/b').then(() => router.push('/c') ) )
Output:
"pathname is undefined" "pathname is /a/" "pathname is /b/" "pathname is /c/"
Running examples
Install modules by running
yarn, then run
yarn start and follow the prompts. | https://reactjsexample.com/react-adapter-components-for-mobx-router/ | CC-MAIN-2021-21 | refinedweb | 499 | 50.94 |
Resource Limits
From Linux-VServer
Most properties related to system resources, might it be the memory consumption, the number of processes or file-handles, qualify for imposing limits on them.
The Linux kernel provides the getrlimit and setrlimit system calls to get and set resource limits per process. Each resource has an associated soft and hard limit. (one with the CAP_SYS_RESOURCE capability) may make arbitrary changes to either limit value.
The Linux-VServer kernel extends this system to provide resource limits for whole contexts, not just single processes. Additionally a few new limit types missing in the vanilla kernel were introduced.
Additionally the context limit system keeps track of observed maxima and resource limit hits, to provide some feedback for the administrator. See Context Accounting for details.
List of Resource Limits
Below is a list of resource limits used for contexts and processes within. The tables contain the following information:
- ID
- Resource limit ID
- Name
- Human readable identifier used in userspace utilities
- procfs
- Name used in /proc/virtual/*/limit
- ulimit
- command line switch for the ulimit utility
- Unit
- Aprropriate unit for the limit
- Tag
- Special resource limit code to denote if resources are accounted, enforced (see below)
- Description
- Description of capability/flag effects
Special Resource Limit Codes
The tag column may contain one or more of the following tags:
Linux Resource Limits
Below is a list of resource limits available in vanilla Linux (>=2.6.18).
Linux-VServer Resource Limits
Below is a list of additional resource limits available in the Linux-VServer kernel.
Determinig the Page Size
You can use the following program to determine the page size for your architecture (if it supports the getpagesize() function)
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    int page_size = getpagesize();
    printf("The page size is %d\n", page_size);
    exit(0);
}
Here's how to compile and run it (assuming you save it as pagesize.c):
# gcc pagesize.c -o pagesize
# ./pagesize
The page size is 4096
If you prefer, you can get the page size using Python: just start a Python console and type:
>>> import resource
>>> resource.getpagesize()
4096
Configure Resource Limits in util-vserver
For example, if the address space size (AS) and the resident set size (RSS) should be limited, the appropriate config files would be the following. You might have to create the parent directories first.
# ls -al /etc/vservers/myguest/rlimits
total 28
drwxr-xr-x 2 root root 4096 2005-08-24 12:37 .
drwxr-xr-x 5 root root 4096 2005-08-24 00:22 ..
-rw-r--r-- 1 root root    6 2005-08-24 12:43 as
-rw-r--r-- 1 root root    6 2005-08-24 12:37 rss
# cat /etc/vservers/myguest/rlimits/as
90000
# cat /etc/vservers/myguest/rlimits/rss
10000
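To create such a configuration from scratch, the commands look roughly like this (a sketch; a scratch directory stands in for /etc/vservers here so it can run unprivileged):

```shell
# Stand-in for /etc/vservers; use the real path on an actual host.
conf_root=./vservers-demo
guest="$conf_root/myguest"

# Create the parent directories first, then write one value per limit file.
mkdir -p "$guest/rlimits"
echo 90000 > "$guest/rlimits/as"    # address space limit
echo 10000 > "$guest/rlimits/rss"   # resident set size limit
```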
Header:
#include <CUnit/TestDB.h> (included automatically by <CUnit/CUnit.h>)
typedef struct CU_TestRegistry
typedef CU_TestRegistry* CU_pTestRegistry
CU_ErrorCode CU_initialize_registry(void)
void CU_cleanup_registry(void)
CU_BOOL CU_registry_initialized(void)
CU_pTestRegistry CU_get_registry(void)
CU_pTestRegistry CU_set_registry(CU_pTestRegistry pTestRegistry)
CU_pTestRegistry CU_create_new_registry(void)
void CU_destroy_existing_registry(CU_pTestRegistry* ppRegistry)
The test registry is the repository for suites and associated tests. CUnit maintains an active test registry which is updated when the user adds a suite or test. The suites in this active registry are the ones run when the user chooses to run all tests.
The CUnit test registry is a data structure CU_TestRegistry declared in <CUnit/TestDB.h>. It includes fields for the total numbers of suites and tests stored in the registry, as well as a pointer to the head of the linked list of registered suites.
typedef struct CU_TestRegistry
{
unsigned int uiNumberOfSuites;
unsigned int uiNumberOfTests;
CU_pSuite pSuite;
} CU_TestRegistry;
typedef CU_TestRegistry* CU_pTestRegistry;
The user normally only needs to initialize the registry before use and clean up afterwards. However, other functions are provided to manipulate the registry when necessary.
The active CUnit test registry must be initialized before use. The user should call CU_initialize_registry() before calling any other CUnit functions. Failure to do so will likely result in a crash.
An error status code is returned: CUE_SUCCESS if initialization was successful, or CUE_NOMEMORY if memory allocation failed.
CU_registry_initialized() can be used to check whether the registry has been initialized. This may be useful if the registry setup is distributed over multiple files that need to make sure the registry is ready for test registration.
When testing is complete, the user should call CU_cleanup_registry() to clean up and release memory used by the framework. This should be the last CUnit function called (except for restoring the test registry using CU_initialize_registry() or CU_set_registry()).
Failure to call CU_cleanup_registry() will result in memory leaks. It may be called more than once without creating an error condition. Note that this function will destroy all suites (and associated tests) in the registry. Pointers to registered suites and tests should not be dereferenced after cleaning up the registry.
Calling CU_cleanup_registry() will only affect the internal CU_TestRegistry maintained by the CUnit framework. Destruction of any other test registries owned by the user are the responsibility of the user. This can be done explicitly by calling CU_destroy_existing_registry(), or implicitly by making the registry active using CU_set_registry() and calling CU_cleanup_registry() again.
Other registry functions are provided primarily for internal and testing purposes. However, general users may find use for them and should be aware of them.
These include:
CU_get_registry() returns a pointer to the active test registry. The registry is a variable of data type CU_TestRegistry. Direct manipulation of the internal test registry is not recommended - API functions should be used instead. The framework maintains ownership of the registry, so the returned pointer will be invalidated by a call to CU_cleanup_registry() or CU_initialize_registry().
CU_set_registry() replaces the active registry with the specified one. A pointer to the previous registry is returned. It is the caller's responsibility to destroy the old registry. This can be done explicitly by calling CU_destroy_existing_registry() for the returned pointer. Alternatively, the registry can be made active using CU_set_registry() and destroyed implicitly when CU_cleanup_registry() is called. Care should be taken not to explicitly destroy a registry that is set as the active one. This can result in multiple frees of the same memory and a likely crash.
CU_create_new_registry() creates a new registry and returns a pointer to it. The new registry will not contain any suites or tests. It is the caller's responsibility to destroy the new registry by one of the mechanisms described previously.
CU_destroy_existing_registry() destroys and frees all memory for the specified test registry, including any registered suites and tests. This function should not be called for a registry which is set as the active test registry (e.g., a CU_pTestRegistry pointer retrieved using CU_get_registry()). Doing so will result in a multiple free of the same memory when CU_cleanup_registry() is called. ppRegistry may not be NULL, but the pointer it points to may be. In that case, the function has no effect. Note that *ppRegistry will be set to NULL upon return.
The following data types and functions are deprecated as of version 2. To use these deprecated names, user code must be compiled with USE_DEPRECATED_CUNIT_NAMES defined.
#include <CUnit/TestDB.h> (included automatically by <CUnit/CUnit.h>)
Charts Usage w/ Angular 2 (Version 2.0.1)
Hello,
I am having some trouble getting charts to work correctly with my Angular 2 application. I followed the Usage with Angular 2 instructions under the Vaadin Charts - Elements API section in the docs, and I am having issues trying to use certain options for the charts. I can see that the docs haven't been updated to the final Angular 2 release yet, but I was hoping I could figure it out myself by just adding the VaadinCharts and DataSeries imports to my app.module and then under the declarations. It seems to work when using the charts, as the chart will display with data, but when I try to use things in my HTML, like <chart>, <chart-title>, <x-axis> and others, I get errors in my web browser saying that those are not known elements. It mentions that if it's a web component, I should add CUSTOM_ELEMENTS_SCHEMA to the @NgModule.schemas, but that didn't help.
Does anyone know if I am missing something else or are Vaadin charts just not compatible with Angular 2 final release yet? Thanks for any help or suggestions.
Hi and sorry for the late response!
There is a pull request to vaadin-charts to update the documentation and tests for the Angular 2 final version. TL;DR: support for directives will be deprecated, and we will promote the use of the angular2-polymer directive with vaadin-charts. angular2-polymer works with the final version of Angular, and you can find the docs for it here. One thing you do have to change is to make your app use NO_ERRORS_SCHEMA, as elements like <chart> are not supported with the custom elements syntax.
import { NgModule, NO_ERRORS_SCHEMA } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { PolymerElement } from '@vaadin/angular2-polymer';
import { AppComponent } from './app.component';

@NgModule({
  imports: [BrowserModule],
  declarations: [AppComponent, PolymerElement('vaadin-pie-chart')],
  bootstrap: [AppComponent],
  schemas: [NO_ERRORS_SCHEMA]
})
export class AppModule { }
JUnit is the glue that holds many open source projects together. But JUnit has problems performing multithreaded unit tests. This creates considerable difficulty for middleware developers in the open source J2EE market. This article introduces GroboUtils, a JUnit extension library designed to address this problem and enable multithreaded unit testing in JUnit. A basic understanding of JUnit and threads is recommended but not necessary for readers of this article.
Introduction
If you've worked on open source Java projects or read the multitude of books on "Extreme Programming" and other rapid-development models, then you've likely heard about JUnit. Written by Erich Gamma and Kent Beck, JUnit is an automated testing framework for Java. It allows you to define "unit tests" for your software -- programs that test whether or not the code is functioning properly, usually on a method-by-method basis.
JUnit can help your development team in many ways -- too many to cover in one article. But from one developer to another, JUnit truly excels at two things:
It forces you to use your own code. Your tests function as client code for your production code. Getting to know your software from the client's perspective can help you identify problems in the API and improve how the code will eventually get used.
It gives you confidence to make changes in your software. You'll know right away if you've broken the test cases. At the end of the day, if the light is green, the code is clean. Check it in with confidence.
But JUnit is no silver bullet. Third-party extension libraries such as HttpUnit, JWebUnit, XMLUnit, and a host of others have risen to address perceived holes in the framework by adding functionality. One of the areas JUnit doesn't cover is multithreaded unit tests.
In this article, we're going to look at a little-known extension library that solves this problem. We'll start by setting up the JUnit framework and running an example to show poor use of threads in testing. After we've identified the obstacles of threaded testing, we'll walk through an example using the GroboUtils framework.
Threads in Review
For those of you new to threads, it's all right to panic a bit at this point -- just don't overdo it. Get it out of your system. We're going to take a fifty-thousand-foot view of threads. Threads allow your software to multitask -- that is, do two things at the same time.
In their book A Programmer's Guide to Java Certification, Khalid Mugal and Rolf Rasmussen briefly describe threads as follows:
"A thread is a path of execution within a program, that is executed separately. At runtime, threads in a program have a common memory space and can therefore share data and code; i.e., they are lightweight. They also share the process running the program.
Java threads make the runtime environment asynchronous, allowing different tasks to be performed concurrently." (p.272)
In web applications, many users can send requests to your software at the same time. When writing unit tests to stress your code, you need to simulate that sort of concurrent traffic. This is especially true if you're trying to develop robust middleware components. Threaded tests would be ideal for these components.
Unfortunately, JUnit is lacking in this arena.
Problems with JUnit and Multithreaded Tests
If you want to try out the following code you need to download and install JUnit. Instructions for doing so can be found at the JUnit web site. Without delving too far into details, we're going to briefly examine how JUnit works. To write a JUnit test, you must first create a test class that extends junit.framework.TestCase, the basic unit test class in JUnit.
The main() and suite() methods are used to start the tests. From the command line or from an IDE, make sure that junit.jar is in your classpath, then compile and run the following code for the BadExampleTest class.
import junit.framework.*;

public class BadExampleTest extends TestCase {

    // For now, just verify that the test runs
    public void testExampleThread() throws Throwable {
        System.out.println("Hello, World");
    }

    public static void main(String[] args) {
        String[] name = { BadExampleTest.class.getName() };
        junit.textui.TestRunner.main(name);
    }

    public static Test suite() {
        return new TestSuite(BadExampleTest.class);
    }
}
Run BadExampleTest to verify that everything has been set up correctly. Once the main() method is called, the framework will automatically execute any method whose name begins with "test". Go ahead and try to run the test class. If you've done everything correctly, it should kick out the message "Hello World" in the output window.
Now, we're going to add a thread class to the program. We're going to do this by extending the java.lang.Runnable interface. Eventually, we'll switch our strategy and extend a class that automates thread creation.
Create a private inner class called DelayedHello that extends Runnable. The call to run() is implicit in the DelayedHello constructor, where we create a new Thread and call its start() method.
import junit.framework.*;

public class BadExampleTest extends TestCase {

    private Runnable runnable;

    public class DelayedHello implements Runnable {

        private int count;
        private Thread worker;

        private DelayedHello(int count) {
            this.count = count;
            worker = new Thread(this);
            worker.start();
        }

        public void run() {
            try {
                Thread.sleep(count);
                System.out.println("Delayed Hello World");
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    public void testExampleThread() throws Throwable {
        System.out.println("Hello, World");      //1
        runnable = new DelayedHello(5000);       //2
        System.out.println("Goodbye, World");    //3
    }

    public static void main(String[] args) {
        String[] name = { BadExampleTest.class.getName() };
        junit.textui.TestRunner.main(name);
    }

    public static Test suite() {
        return new TestSuite(BadExampleTest.class);
    }
}
The method testExampleThread() isn't really much of a test method. In practice, you want the tests to be automated and don't want to ever have to check output to the console. But here, the point is to demonstrate that JUnit does not support multithreading.
Note that testExampleThread() performs three tasks:
- Prints "Hello, World".
- Initializes and starts a thread that is supposed to print "Delayed Hello World".
- Prints "Goodbye, World".
If you run this test, you'll notice something wrong. The testExampleThread() method runs and ends as you would expect it to. It fires off the thread without any exceptions. But you never hear back from the thread. Notice you never see "Delayed Hello World". Why? Because JUnit finishes execution while the thread is still alive. There could have been problems down the line, toward the end of that thread's execution, but your test would never reflect it.
The problem lies in JUnit's TestRunner. It isn't designed to look for Runnable instances and wait around to report on their activities. It fires them off and forgets about them. For this reason, multithreaded unit tests in JUnit have been nearly impossible to write and maintain.
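Stripped of any framework, the missing step is a call to Thread.join(). A plain-JDK sketch (the class and method names here are illustrative, not part of JUnit or GroboUtils):

```java
import java.util.concurrent.atomic.AtomicBoolean;

class ThreadWait {

    /** Start a worker thread and block until it has finished. */
    public static void runAndWait(Runnable task) throws InterruptedException {
        Thread worker = new Thread(task);
        worker.start();
        worker.join();   // the wait that JUnit's TestRunner never performs
    }

    /** True only if the worker's side effect is visible after the wait. */
    public static boolean demo() {
        final AtomicBoolean done = new AtomicBoolean(false);
        try {
            runAndWait(new Runnable() {
                public void run() {
                    done.set(true);   // stands in for "Delayed Hello World"
                }
            });
        } catch (InterruptedException e) {
            return false;
        }
        return done.get();
    }
}
```

In effect, the runner introduced below performs this kind of join for every test thread before letting the test method return.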
Enter GroboUtils
GroboUtils is an open source project written by Matt Albrecht that aims to expand the testing possibilities of Java. GroboUtils is distributed under the MIT license, which makes it very friendly for inclusion in other open source projects.
GroboTestingJUnit Subproject
GroboUtils is broken into subprojects that experiment with similar aspects of testing. This article focuses on the GroboTestingJUnit subproject, an extension library for JUnit that introduces support for multithreaded testing. (This subproject also introduces Integration Tests and the concept of failure severity, but those features fall outside of the scope of this article.)
Within the GroboTestingJUnit subproject is GroboTestingJUnit-1.1.0-core.jar. It contains the MultiThreadedTestRunner and TestRunnable classes, both of which are necessary for extending JUnit to handle multithreaded tests.
TestRunnable
The TestRunnable class extends junit.framework.Assert and implements java.lang.Runnable. You should define TestRunnable objects as inner classes inside of your tests. Although traditional thread classes implemented a run() method, your nested TestRunnable classes must implement the runTest() method instead. The runTest() method will be invoked by the MultiThreadedTestRunner at runtime, so you shouldn't invoke it in the constructor.
MultiThreadedTestRunner
MultiThreadedTestRunner is a framework that allows for an array of threads to be run asynchronously inside of JUnit. Modeled after an article written by Andy Schneider, this class accepts an array of TestRunnable instances as parameters in its constructor. Once an instance of this class is built, its runTestRunnables() method should be invoked to begin the threaded tests.
Unlike a standard JUnit TestRunner, the MultiThreadedTestRunner will wait until all threads have terminated to exit. This forces JUnit to wait while the threads do their work, nicely solving our problem from earlier. Let's take a look at how to integrate the GroboUtils API with JUnit.
Writing the Multithreaded Test
The inner class now extends the net.sourceforge.groboutils.junit.v1.TestRunnable class, which requires that we override the runTest() method.
This time, we don't create a worker thread at all. The class MultiThreadedTestRunner will do this under the hood. Instead of implementing the run() method, we override runTest(), which later gets invoked by the MultiThreadedTestRunner -- we never call it ourselves.
Once the TestRunnable is defined, we must define our new test case. In our testExampleThread() method, we instantiate several TestRunnable objects and add them to an array. After that, we instantiate the MultiThreadedTestRunner, passing the TestRunnable array in as a constructor parameter. Now that we have an instance of MultiThreadedTestRunner, we call its runTestRunnables() method.
MultiThreadedTestRunner (unlike the TestRunner in JUnit) will wait for every running thread to expire before continuing on. Also, it creates the worker threads and calls the start() methods concurrently for each TestRunnable passed in through its constructor. That means you don't have to jump through the hoops of creating your own threads -- MultiThreadedTestRunner does it for you.
Here's a final version of ExampleTest:
import junit.framework.*;
import net.sourceforge.groboutils.junit.v1.*;

public class ExampleTest extends TestCase {

    private TestRunnable testRunnable;

    /**
     * A TestRunnable that sleeps for a random interval of
     * two to five seconds, then prints a message.
     */
    public class DelayedHello extends TestRunnable {

        private String id;

        private DelayedHello(String id) {
            this.id = id;
        }

        public void runTest() throws Throwable {
            Thread.sleep((long) (2000 + Math.random() * 3000));
            System.out.println("Delayed Hello World " + id);
        }
    }

    /**
     * You use the MultiThreadedTestRunner in
     * your test cases. The MTTR takes an array
     * of TestRunnable objects as parameters in
     * its constructor.
     *
     * After you have built the MTTR, you run it
     * with a call to the runTestRunnables()
     * method.
     */
    public void testExampleThread() throws Throwable {
        //instantiate the TestRunnable classes
        TestRunnable tr1, tr2, tr3;
        tr1 = new DelayedHello("1");
        tr2 = new DelayedHello("2");
        tr3 = new DelayedHello("3");

        //pass that instance to the MTTR
        TestRunnable[] trs = { tr1, tr2, tr3 };
        MultiThreadedTestRunner mttr = new MultiThreadedTestRunner(trs);

        //kickstarts the MTTR & fires off threads
        mttr.runTestRunnables();
    }

    /**
     * Standard main() and suite() methods
     */
    public static void main(String[] args) {
        String[] name = { ExampleTest.class.getName() };
        junit.textui.TestRunner.main(name);
    }

    public static Test suite() {
        return new TestSuite(ExampleTest.class);
    }
}
Each thread will feed you back its output between two and five seconds after you fire off the test. Not only will they all show up on time, but they'll show up in a random order, proving concurrency. The unit test won't finish until they're done. With the addition of MultiThreadedTestRunner, JUnit patiently waits for the TestRunnables to complete their work before continuing on with the test cases. Optionally, you can set a maximum time allotment for the MultiThreadedTestRunner to execute (so you don't hang the test on a runaway thread).
To compile and run ExampleTest you will need both junit.jar and GroboUtils-2-core.jar in your classpath. You should see "Delayed Hello World" for each of the threads in some random order as output.
Conclusion
Writing a multithreaded unit test doesn't need to be painful or frustrating (much less impossible). The GroboUtils library provides a clear and simple API for writing multithreaded unit tests. By adding this library to your toolkit, you can extend your unit testing to simulate heavy web traffic and concurrent database transactions, and stress test your synchronized methods.
Have fun!
References
- Erich Gamma and Kent Beck's JUnit Project
- Matt Albrecht's GroboUtils Project
- Mughal, Khalid. Rasmussen, Rolf. A Programmer's Guide to Java Certification, A Comprehensive Primer. Addison-Wesley. Harlow, England. 2000. (272)
- Schneider, Andrew. "JUnit Best Practices: Techniques for Building Resilient, Relocatable, Multithreaded JUnit." JavaWorld, 2000.
Integration Platform Technologies: Siebel Enterprise Application Integration > Web Services > About XML Schema Support for the <xsd:any> Tag >
For the case of the XML Schema Wizard, there is only one possible mapping for the <xsd:any> tag, namely as an integration component.
The <xsd:any> tag can contain an attribute called namespace. If the value for that attribute is known, then one or more integration components or even an integration object can be created. If the value for that attribute is not known, an error will be returned to the user saying that the integration object cannot be created for a weakly typed schema.
Here, the attribute value being "known" refers to the situation in the XML Schema Wizard where a schema whose targetNamespace value is the same as the namespace value has been imported by way of the <xsd:import> tag.
For the case of being known, all the global elements belonging to the particular schema of that targetNamespace will be added in place of the tag. So, one or more integration components can potentially be created.
The mapping of the <xsd:anyAttribute> is similar to that of the <xsd:any> tag. In this case, one or more integration component fields can be created.
The <xsd:anyAttribute> tag has an attribute called namespace. If the namespace value is known (the condition for being known was noted in this section), then all the global attributes for that particular schema will be added in place of this tag. Therefore, one or more integration component fields can potentially be created.
In the case where the namespace value is not known, then an error is returned to the user stating that an integration object cannot be created for a weakly typed schema. | http://docs.oracle.com/cd/B31104_02/books/EAI2/EAI2_WebServices19.html | CC-MAIN-2016-40 | refinedweb | 292 | 57.71 |
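For illustration only, a fragment along these lines would exercise both wildcards; the namespace URI and file name are hypothetical:

```xml
<xsd:schema xmlns:
  <!-- Makes the target namespace "known" to the wizard -->
  <xsd:import namespace="http://example.com/known"
              schemaLocation="known.xsd"/>
  <xsd:element name="Container">
    <xsd:complexType>
      <xsd:sequence>
        <!-- Global elements of the imported schema are substituted here,
             each potentially becoming an integration component -->
        <xsd:any namespace="http://example.com/known"/>
      </xsd:sequence>
      <!-- Global attributes of the imported schema are substituted here,
           each potentially becoming an integration component field -->
      <xsd:anyAttribute namespace="http://example.com/known"/>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>
```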
Task #20437
Milestone #20350: IOTA BPM deployment
Functional specification document
Description
Collect functional specification from IOTA specialists and document in the Wiki.
History
#1 Updated by John Diamond over 1 year ago
Here are some specs that I received via e-mail from Chip -
I realized that I forgot to send along the naming convention. This is attached, but the general convention for IOTA BPMs is:
N:IBggyx
where gg is the girder, y is L/R/C (left, right, or center), and x is a single character for channel. Unfortunately this prohibits the use of 2-character designators like we've used with the line, but we really would like to stick with the 8-character legacy naming space for the short character designation. What I would propose is the following for 'x':
H = Horizontal position
V = Vertical position
I = Intensity (amplitude)
A/B/C/D = individual button magnitudes
Between the meeting and speaking with Dan, it sounds like we want [0] for each of these to be sampled periodically (15 Hz to allow for the max datapool update?). [1:~100] should be before beam is injected (this can float a little so if one injection gives us 105 empty buckets before injection and the next gives us 99, that's alright as long as we have at least a reasonable sample of the noise floor ahead of injection according to Sasha R. This may only really mean something for the A/B/C/D magnitudes ahead of fitting, but we probably want to be able to correspond the turns without having to remember an offset between them and the fit H/V/I devices. TBT data should fill the buffer following this ([~100:~8000]).
Other parameters can probably be fit into the namespace as needed with remaining alpha-numeric characters, but if you want guidance on any in particular, I can make suggestions.
Naming for the full orbit devices should be N:IBPMx, where x again is H/V/I. Elements for these devices should be in order around the ring from injection ([0:28]): A1C, A2R, A3R, R1R, B1R, B2R, R2R, C1R, C2R, R3R, D1R, D2R, R4R, E1R, E2R, E2L, E1L, R4L, D2L, D1L, R3L, C2L, C1L, R2L, B2L, B1L, R1L, A3L, A2L. While this would be for the periodic BPM sample ([0] for N:BggyH/V/I), it would be handy for our synoptic displays to be able to tell the frontend to load a given TBT frame too. I would propose this to be in a second set of orbit devices N:IBPMTx (x=H/V/I/N, where N would just be the index of the element set you want copied into the H/V/I devices).
Taking all this into account, the BPMs would work as follows:
An event is generated to arm the kickers/BPMs/etc and the BPMs start the TBT buffer. Each BPM device (e.g. N:BA1CH, or the BPM on girder A1, center position on the ring between the left/right symmetry, horizontal position) will be recording the periodic sample of the position into element [0], ~100 elements prior to injection into [1:100], and then all TBT information into the elements after that to fill out the array. The full orbit device will be updated with all the [0] elements for all the BPMs around the ring
(e.g. N:IBPMH0 = N:IBA1CH0, N:IBPMH1 = N:IBA2RH0, N:IBPMH2 = N:IBA3RH0, ... , N:IBPMH28 = N:IBA2LH0). If the operator wants to inspect the orbit at turn ~1000 in the BPM synoptic display, they'll set N:IBPMTN to 1100 on the synoptic display and the BPM front end should take elements [1100] for each of the BPM devices and build up the tbt orbit device (e.g. N:IBPMTH0 = N:IBA1CH1100, N:IBPMTH1 = N:IBA2RH1100, N:IBPMTH2 = N:IBA3RH1100, ... , N:IBPMTH28 = N:IBA2LH1100).
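That mapping can be sketched in a few lines of Python (the helper names are hypothetical; the ring order is copied from above):

```python
# Ring order from injection, elements [0:28] of the full orbit devices.
RING_ORDER = [
    "A1C", "A2R", "A3R", "R1R", "B1R", "B2R", "R2R", "C1R", "C2R", "R3R",
    "D1R", "D2R", "R4R", "E1R", "E2R", "E2L", "E1L", "R4L", "D2L", "D1L",
    "R3L", "C2L", "C1L", "R2L", "B2L", "B1L", "R1L", "A3L", "A2L",
]

def bpm_device(girder, channel):
    """Name of a single-BPM device, N:IBggyx (e.g. N:IBA1CH)."""
    return "N:IB" + girder + channel

def full_orbit_sources(channel):
    """Per-BPM devices whose element [0] fills N:IBPMx around the ring."""
    return [bpm_device(girder, channel) for girder in RING_ORDER]
```

So full_orbit_sources("H")[0] is "N:IBA1CH", matching N:IBPMH0 = N:IBA1CH0 above.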
how do I access value from query in Groovy script
How do I get the value from the query in the Groovy script?
I can get the values from the Response by using for example:
def driverId = context.expand('${userInfo#Response#$id}')
However, if I want to get the value of the query parameter to-date in this example, what do I put in ${ }?
Hi,
Your question is not completely clear.
My take is that you want to 'pull' a value from the Groovy script step and put that value into the 'to-date' parameter for the service call.
The syntax for getting the result from a Groovy script step is ${Name of Groovy Script step#result}. The '#result' bit is important; it tells SoapUI to get the returned result from the script. Also ensure your Groovy script has a return statement at the end. E.g.:
def someValueToReturn = '17-JAN-2022';
return someValueToReturn;
I would recommend to place the value (2022-02-01) into a TestCase property or some other variable which you can easilly access.
Then you can:
- use its value inside REST Request as e.g. ${#TestCase#toDate}
- inside Groovy script as well (via context expansion or TestCase Java methods)
Using the SoapUI Java API from Groovy to reach the query parameter would also be an option, but probably harder to use and maintain.
Best regards,
Karel
Thank you, KarelHusa,
I tried the #testcase#todate solution; it didn't quite work for me. However, I discovered a Custom Properties tab where I can access the values I was looking for without adding too much work for myself. My Groovy script will use context.expand to access those values using #testStep#propertyname. Thank you for pointing me in the right direction.
I am a beginner programmer who knows the basics of Java and C# and am now working on a fun and educational project with a few friends in XNA. My goal here is to gain more understanding of OOP and to start applying it comfortably, as a natural part of my programming. I'm at a point where I want to create Sprite classes to use in my main code to bring those sprites into my application. I have had several problems, so I'd like to establish some solid ground by checking whether the things I know so far are correct. I should also mention that I have read quite a lot of the tutorials, but I've been getting bits and pieces from everything, so here's my understanding without knowing whether what I have gathered is correct.
When I make my XNA game project I have
public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; }
I guess the GraphicsDeviceManager applies my computer's graphics settings. Can the "this" in the method arguments be changed to anything, and if so, for what reason?
Moving on we have the Initialize method
protected override void Initialize() { // TODO: Add your initialization logic here base.Initialize(); }
I'm quite lost here. I know I can create my new objects there, but I'm kind of clueless about the logic behind it. A short, logical explanation would be of great use here.
Next we have the LoadContent method.
Basically, in the LoadContent method I can load items from the content that I have previously added (to GameContent, in my case) and then assign those items to certain objects?
The next thing that I don't understand fully is the line:
spriteBatch = new SpriteBatch(GraphicsDevice);
Next there's the Update method.
My question here is: how many times per second does the Update method run, approximately, and where does this number come from?
If I get answers to those questions, then I think I'll understand that part fully.
Moving on to Classes.
Let's say I make a new class called "Sprite".
I would love to have a confirmation (or, you know... the opposite of that word) of my statements.
When I create a new class, do classes in XNA behave the same way as, for example, in Java? That is, at first I create all of the necessary class variables that I need to use in that class (for example, Vector2 for its location and Texture2D for its texture). Next I would create the constructor, but as far as I have seen from examples, the constructor in XNA is basically the LoadContent method, in which I give my previously declared variables their values. One thing I can't fully understand there is the ContentManager variable, which I apparently need to use in the method.
Next, having created the new class, I would have to create a Draw method for the class, right? In it I use SpriteBatch.Draw(given texture, given position, given color).
Having created the class, I now turn to my main class, Game1.cs in my case.
In the start of the program, at public class Game1, I declare that I'm going to use, for example:
Sprite man;
Then, in the Initialize method (as said before, I'm not totally sure why), I create the object called man.
man = new Sprite(); // what else can I do in this method with my new sprite?
Next we have the LoadContent method in main hwere i just have to call my given sprite's LoadContent method like this :
man.LoadContent(this.Content, "Man" ) // given that my mans sprite name was Man.png
Now that I have loaded the content, all that's left for me to do is draw it, right?
For that, in my main Game1.cs class, I turn to the Draw method and just type:
spriteBatch.Begin(); man.Draw(this.spriteBatch); spriteBatch.End(); // Again... what is up with the spriteBatches?
I know this is a really beginner-level and noobish thread, but there aren't really good tutorials online and I still have at least a week until my book arrives.
If anyone would be awesome enough to help, I would greatly appreciate it!
Cheers
MheQ | http://www.dreamincode.net/forums/topic/292054-xna-beginner-questionsconfirmation-of-knowledge-thusfar/ | CC-MAIN-2017-22 | refinedweb | 721 | 71.85 |
By the end of today's blog post, you will understand how to implement, train, and evaluate a Convolutional Neural Network on your own custom dataset.
And in next week’s post, I’ll be demonstrating how you can take your trained Keras model and deploy it to a smartphone app with just a few lines of code!
To keep the series lighthearted and fun, I am fulfilling a childhood dream of mine and building a Pokedex. A Pokedex is a device that exists in the world of Pokemon, a popular TV show, video game, and trading card series (I was/still am a huge Pokemon fan).
If you are unfamiliar with Pokemon, you should think of a Pokedex as a smartphone app that can recognize Pokemon, the animal-like creatures that exist in the world of Pokemon.
You can swap in your own datasets of course, I’m just having fun and enjoying a bit of childhood nostalgia.
To learn how to train a Convolutional Neural Network with Keras and deep learning on your own custom dataset, just keep reading. In last week's post we gathered our deep learning image dataset; today we will train a Convolutional Neural Network (CNN) on top of that data.
I’ll be showing you how to train your CNN in today’s post using Keras and deep learning. The final part of this series, releasing next week, will demonstrate how you can take your trained Keras model and deploy it to a smartphone (in particular, iPhone) with only a few lines of code.
The end goal of this series is to help you build a fully functional deep learning app — use this series as an inspiration and starting point to help you build your own deep learning applications.
Let’s go ahead and get started training a CNN with Keras and deep learning.
Our deep learning dataset
Our deep learning dataset consists of 1,191 images of Pokemon, (animal-like creatures that exist in the world of Pokemon, the popular TV show, video game, and trading card series).
Our goal is to train a Convolutional Neural Network using Keras and deep learning to recognize and classify each of these Pokemon.
The Pokemon we will be recognizing include:
- Bulbasaur (234 images)
- Charmander (238 images)
- Squirtle (223 images)
- Pikachu (234 images)
- Mewtwo (239 images)
A montage of the training images for each class can be seen in Figure 1 above.
As you can see, our training images include a mix of:
- Still frames from the TV show and movies
- Trading cards
- Action figures
- Toys and plushes
- Drawings and artistic renderings from fans
This diverse mix of training images will allow our CNN to recognize our five Pokemon classes across a range of images — and as we’ll see, we’ll be able to obtain 82%+ classification accuracy!
The Convolutional Neural Network and Keras project structure
Today’s project has several moving parts — to help us wrap our head around the project, let’s start by reviewing our directory structure for the project:
├── dataset
│   ├── bulbasaur [234 entries]
│   ├── charmander [238 entries]
│   ├── mewtwo [239 entries]
│   ├── pikachu [234 entries]
│   └── squirtle [223 entries]
├── examples [6 entries]
├── pyimagesearch
│   ├── __init__.py
│   └── smallervggnet.py
├── plot.png
├── lb.pickle
├── pokedex.model
├── classify.py
└── train.py
There are 3 directories:
- dataset: Contains the five classes; each class is its own respective subdirectory to make parsing class labels easy.
- examples: Contains images we'll be using to test our CNN.
- The pyimagesearch module: Contains our SmallerVGGNet model class (which we'll be implementing later in this post).
And 5 files in the root:
- plot.png: Our training/testing accuracy and loss plot, which is generated after the training script is run.
- lb.pickle: Our serialized LabelBinarizer object file — this contains a class index to class name lookup mechanism.
- pokedex.model: This is our serialized Keras Convolutional Neural Network model file (i.e., the "weights file").
- train.py: We will use this script to train our Keras CNN, plot the accuracy/loss, and then serialize the CNN and label binarizer to disk.
- classify.py: Our testing script.
Our Keras and CNN architecture
The CNN architecture we will be utilizing today is a smaller, more compact variant of the VGGNet network, introduced by Simonyan and Zisserman in their 2014 paper, Very Deep Convolutional Networks for Large Scale Image Recognition.
VGGNet-like architectures are characterized by:
- Using only 3×3 convolutional layers stacked on top of each other in increasing depth
- Reducing volume size by max pooling
- Fully-connected layers at the end of the network prior to a softmax classifier
I assume you already have Keras installed and configured on your system. If not, here are a few links to deep learning development environment configuration tutorials I have put together:
- Configuring Ubuntu for deep learning with Python
- Setting up Ubuntu 16.04 + CUDA + GPU for deep learning with Python
- Configuring macOS for deep learning with Python
If you want to skip configuring your deep learning environment, I would recommend using one of the following pre-configured instances in the cloud:
- Amazon AMI for deep learning with Python
- Microsoft’s data science virtual machine (DSVM) for deep learning
Let's go ahead and implement SmallerVGGNet, our smaller version of VGGNet. Create a new file named smallervggnet.py inside the pyimagesearch module.
First we import our modules — notice that they all come from Keras. Each of these are covered extensively throughout the course of reading Deep Learning for Computer Vision with Python.
Note: You'll also want to create an __init__.py file inside pyimagesearch so Python knows the directory is a module. If you're unfamiliar with __init__.py files or how they are used to create modules, no worries, just use the "Downloads" section at the end of this blog post to download my directory structure, source code, and dataset + example images.
From there, we define our SmallerVGGNet class:

class SmallerVGGNet:
    @staticmethod
    def build(width, height, depth, classes):
        # initialize the model along with the input shape to be
        # "channels last" and the channels dimension itself
        model = Sequential()
        inputShape = (height, width, depth)
        chanDim = -1

        # if we are using "channels first", update the input shape
        # and channels dimension
        if K.image_data_format() == "channels_first":
            inputShape = (depth, height, width)
            chanDim = 1

Our build method requires four parameters:
- width: The image width dimension.
- height: The image height dimension.
- depth: The depth of the image — also known as the number of channels.
- classes: The number of classes in our dataset (which will affect the last layer of our model). We're utilizing 5 Pokemon classes in this post, but don't forget that you could work with the 807 Pokemon species if you downloaded enough example images for each species!
Note: We'll be working with input images that are 96 x 96 with a depth of 3 (as we'll see later in this post). Keep this in mind as we explain the spatial dimensions of the input volume as it passes through the network.
Since we’re using the TensorFlow backend, we arrange the input shape with “channels last” data ordering, but if you want to use “channels first” (Theano, etc.) then it is handled automagically on Lines 23-25.
Now, let’s start adding layers to our model:
# CONV => RELU => POOL
model.add(Conv2D(32, (3, 3), padding="same",
    input_shape=inputShape))
model.add(Activation("relu"))
model.add(BatchNormalization(axis=chanDim))
model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Dropout(0.25))
Above is our first CONV => RELU => POOL block.
The convolution layer has 32 filters with a 3 x 3 kernel. We're using the RELU activation function followed by batch normalization.
Our POOL layer uses a 3 x 3 POOL size to reduce spatial dimensions quickly from 96 x 96 to 32 x 32 (we'll be using 96 x 96 x 3 input images to train our network as we'll see in the next section).
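As a quick sanity check on those numbers, the arithmetic can be sketched in a couple of lines (the helper below is a hypothetical illustration, not part of the project code):

```python
def pool_output_size(input_size, pool_size):
    # A non-overlapping max pool (stride equal to the pool size, the
    # Keras default) divides each spatial dimension by the pool size.
    return input_size // pool_size

# A 3 x 3 pool takes the 96 x 96 volume down to 32 x 32, while the
# preceding "same"-padded 3 x 3 convolution leaves the size unchanged.
print(pool_output_size(96, 3))  # 32
```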
As you can see from the code block, we’ll also be utilizing dropout in our network architecture. Dropout works by randomly disconnecting nodes from the current layer to the next layer. This process of random disconnects during training batches helps naturally introduce redundancy into the model — no one single node in the layer is responsible for predicting a certain class, object, edge, or corner.
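The random-disconnect idea can be sketched in a few lines of NumPy — this is an illustration of "inverted dropout", not Keras's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(activations, rate):
    # Zero each activation independently with probability `rate`, then
    # scale the survivors by 1 / (1 - rate) so the expected value of
    # each activation is unchanged during training.
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

x = np.ones(8)
y = dropout(x, 0.25)
# Each entry of y is now either 0 (disconnected) or 1 / 0.75 (kept).
```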
From there we'll add (CONV => RELU) * 2 layers before applying another POOL layer:
# (CONV => RELU) * 2 => POOL
model.add(Conv2D(64, (3, 3), padding="same"))
model.add(Activation("relu"))
model.add(BatchNormalization(axis=chanDim))
model.add(Conv2D(64, (3, 3), padding="same"))
model.add(Activation("relu"))
model.add(BatchNormalization(axis=chanDim))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
Stacking multiple CONV and RELU layers together (prior to reducing the spatial dimensions of the volume) allows us to learn a richer set of features.
Notice how:
- We're increasing our filter size from 32 to 64. The deeper we go in the network, the smaller the spatial dimensions of our volume, and the more filters we learn.
- We decreased our max pooling size from 3 x 3 to 2 x 2 to ensure we do not reduce our spatial dimensions too quickly.
Dropout is again performed at this stage.
Let's add another set of (CONV => RELU) * 2 => POOL:
# (CONV => RELU) * 2 => POOL
model.add(Conv2D(128, (3, 3), padding="same"))
model.add(Activation("relu"))
model.add(BatchNormalization(axis=chanDim))
model.add(Conv2D(128, (3, 3), padding="same"))
model.add(Activation("relu"))
model.add(BatchNormalization(axis=chanDim))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
Notice that we've increased our filter size to 128 here. Dropout of 25% of the nodes is performed to reduce overfitting again.
And finally, we have a set of FC => RELU layers and a softmax classifier:
# first (and only) set of FC => RELU layers
model.add(Flatten())
model.add(Dense(1024))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Dropout(0.5))

# softmax classifier
model.add(Dense(classes))
model.add(Activation("softmax"))

# return the constructed network architecture
return model
The fully connected layer is specified by Dense(1024) with a rectified linear unit activation and batch normalization.
Dropout is performed a final time — this time notice that we're dropping out 50% of the nodes during training. Typically you'll use a dropout of 40-50% in your fully-connected layers and a dropout with a much lower rate, normally 10-25%, in previous layers (if any dropout is applied at all).
We round out the model with a softmax classifier that will return the predicted probabilities for each class label.
A visualization of the network architecture of the first few layers of SmallerVGGNet can be seen in Figure 2 at the top of this section. To see the full resolution of our Keras CNN implementation of SmallerVGGNet, refer to the following link.
Implementing our CNN + Keras training script
Now that SmallerVGGNet is implemented, we can train our Convolutional Neural Network using Keras.
Open up a new file, name it train.py, and insert the following code where we'll import our required packages and libraries:
# set the matplotlib backend so figures can be saved in the background
import matplotlib
matplotlib.use("Agg")

# import the necessary packages
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import img_to_array
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from pyimagesearch.smallervggnet import SmallerVGGNet
import matplotlib.pyplot as plt
from imutils import paths
import numpy as np
import argparse
import random
import pickle
import cv2
import os
We are going to use the "Agg" matplotlib backend so that figures can be saved in the background (Line 3).
The ImageDataGenerator class will be used for data augmentation, a technique used to take existing images in our dataset and apply random transformations (rotations, shearing, etc.) to generate additional training data. Data augmentation helps prevent overfitting.
Line 7 imports the Adam optimizer, the optimizer method used to train our network.
The LabelBinarizer (Line 9) is an important class to note — this class will enable us to:
- Input a set of class labels (i.e., strings representing the human-readable class labels in our dataset).
- Transform our class labels into one-hot encoded vectors.
- Allow us to take an integer class label prediction from our Keras CNN and transform it back into a human-readable label.
I often get asked here on the PyImageSearch blog how we can transform a class label string to an integer and vice versa. Now you know the solution is to use the LabelBinarizer class.
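If you want to see the mechanism for yourself, here is a rough NumPy sketch of the round trip (LabelBinarizer does all of this for us — the helper names below are made up for illustration):

```python
import numpy as np

# Sorted class names, mirroring how LabelBinarizer orders its classes_
classes = ["bulbasaur", "charmander", "mewtwo", "pikachu", "squirtle"]

def to_one_hot(label):
    # Encode a human-readable label as a one-hot vector
    vec = np.zeros(len(classes))
    vec[classes.index(label)] = 1.0
    return vec

def to_label(vector):
    # Decode: the index of the largest entry maps back to the name
    return classes[int(np.argmax(vector))]

print(to_one_hot("pikachu"))            # [0. 0. 0. 1. 0.]
print(to_label(to_one_hot("pikachu")))  # pikachu
```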
The train_test_split function (Line 10) will be used to create our training and testing splits. Also take note of our SmallerVGGNet import on Line 11 — this is the Keras CNN we just implemented in the previous section.
Readers of this blog are familiar with my very own imutils package. If you don’t have it installed/updated, you can install it via:
$ pip install --upgrade imutils
If you are using a Python virtual environment (as we typically do here on the PyImageSearch blog), make sure you use the workon command to access your particular virtual environment before installing/upgrading imutils.
From there, let’s parse our command line arguments:
# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", required=True,
    help="path to input dataset (i.e., directory of images)")
ap.add_argument("-m", "--model", required=True,
    help="path to output model")
ap.add_argument("-l", "--labelbin", required=True,
    help="path to output label binarizer")
ap.add_argument("-p", "--plot", type=str, default="plot.png",
    help="path to output accuracy/loss plot")
args = vars(ap.parse_args())
For our training script, we need to supply three required command line arguments:
- --dataset: The path to the input dataset. Our dataset is organized in a dataset directory with subdirectories representing each class. Inside each subdirectory is ~250 Pokemon images. See the project directory structure at the top of this post for more details.
- --model: The path to the output model — this training script will train the model and output it to disk.
- --labelbin: The path to the output label binarizer — as you'll see shortly, we'll extract the class labels from the dataset directory names and build the label binarizer.
We also have one optional argument, --plot. If you don't specify a path/filename, then a plot.png file will be placed in the current working directory.
You do not need to modify Lines 22-31 to supply new file paths. The command line arguments are handled at runtime. If this doesn’t make sense to you, be sure to review my command line arguments blog post.
Now that we’ve taken care of our command line arguments, let’s initialize some important variables:
# initialize the number of epochs to train for, initial learning rate,
# batch size, and image dimensions
EPOCHS = 100
INIT_LR = 1e-3
BS = 32
IMAGE_DIMS = (96, 96, 3)

# initialize the data and labels
data = []
labels = []

# grab the image paths and randomly shuffle them
print("[INFO] loading images...")
imagePaths = sorted(list(paths.list_images(args["dataset"])))
random.seed(42)
random.shuffle(imagePaths)
Lines 35-38 initialize important variables used when training our Keras CNN:
- EPOCHS: The total number of epochs we will be training our network for (i.e., how many times our network "sees" each training example and learns patterns from it).
- INIT_LR: The initial learning rate — a value of 1e-3 is the default value for the Adam optimizer, the optimizer we will be using to train the network.
- BS: We will be passing batches of images into our network for training. There are multiple batches per epoch. The BS value controls the batch size.
- IMAGE_DIMS: Here we supply the spatial dimensions of our input images. We'll require our input images to be 96 x 96 pixels with 3 channels (i.e., RGB). I'll also note that we specifically designed SmallerVGGNet with 96 x 96 images in mind.
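To make the relationship between these values concrete, the number of batches ("steps") per epoch is simply the training-set size divided by the batch size (the 952 below assumes the 80/20 split of our 1,191 images):

```python
EPOCHS = 100
BS = 32
num_train = 952  # roughly 80% of our 1,191 images

steps_per_epoch = num_train // BS  # number of full batches per epoch
total_batches = steps_per_epoch * EPOCHS

print(steps_per_epoch)  # 29
print(total_batches)    # 2900
```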
We also initialize two lists — data and labels — which will hold the preprocessed images and labels, respectively.
Lines 46-48 grab all of the image paths and randomly shuffle them.
And from there, we'll loop over each of those imagePaths:
# loop over the input images
for imagePath in imagePaths:
    # load the image, pre-process it, and store it in the data list
    image = cv2.imread(imagePath)
    image = cv2.resize(image, (IMAGE_DIMS[1], IMAGE_DIMS[0]))
    image = img_to_array(image)
    data.append(image)

    # extract the class label from the image path and update the
    # labels list
    label = imagePath.split(os.path.sep)[-2]
    labels.append(label)
We loop over the imagePaths on Line 51 and then proceed to load the image (Line 53) and resize it to accommodate our model (Line 54).
Now it's time to update our data and labels lists.
We call the Keras img_to_array function to convert the image to a Keras-compatible array (Line 55) followed by appending the image to our list called data (Line 56).
For our labels list, we extract the label from the file path on Line 60 and append it (the label) on Line 61.
So, why does this class label parsing process work?
Consider the fact that we purposely created our dataset directory structure to have the following format:
dataset/{CLASS_LABEL}/{FILENAME}.jpg
Using the path separator on Line 60 we can split the path into an array and then grab the second-to-last entry in the list — the class label.
If this process seems confusing to you, I would encourage you to open up a Python shell and explore an example imagePath by splitting the path on your operating system's respective path separator.
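For example, on a Unix-like system the split looks like this (the path below is made up, but follows our dataset layout):

```python
import os

imagePath = "dataset/charmander/00000023.jpg"  # hypothetical filename

# Split on the OS path separator and grab the second-to-last entry,
# which is the name of the class subdirectory
parts = imagePath.split(os.path.sep)
label = parts[-2]
print(label)  # charmander
```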
Let’s keep moving. A few things are happening in this next code block — additional preprocessing, binarizing labels, and partitioning the data:
# scale the raw pixel intensities to the range [0, 1]
data = np.array(data, dtype="float") / 255.0
labels = np.array(labels)
print("[INFO] data matrix: {:.2f}MB".format(
    data.nbytes / (1024 * 1000.0)))

# binarize the labels
lb = LabelBinarizer()
labels = lb.fit_transform(labels)

# partition the data into training and testing splits using 80% of
# the data for training and the remaining 20% for testing
(trainX, testX, trainY, testY) = train_test_split(data,
    labels, test_size=0.2, random_state=42)
Here we first convert the data array to a NumPy array and then scale the pixel intensities to the range [0, 1] (Line 64). We also convert the labels from a list to a NumPy array on Line 65. An info message is printed which shows the size (in MB) of the data matrix.
Then, we binarize the labels utilizing scikit-learn's LabelBinarizer (Lines 70 and 71).
With deep learning, or any machine learning for that matter, a common practice is to make a training and testing split. This is handled on Lines 75 and 76 where we create an 80/20 random split of the data.
Next, let’s create our image data augmentation object:
# construct the image generator for data augmentation
aug = ImageDataGenerator(rotation_range=25, width_shift_range=0.1,
    height_shift_range=0.1, shear_range=0.2, zoom_range=0.2,
    horizontal_flip=True, fill_mode="nearest")
Since we’re working with a limited amount of data points (< 250 images per class), we can make use of data augmentation during the training process to give our model more images (based on existing images) to train with.
Data Augmentation is a tool that should be in every deep learning practitioner’s toolbox. I cover data augmentation in the Practitioner Bundle of Deep Learning for Computer Vision with Python.
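To build intuition for what the generator does, here is a tiny NumPy sketch of one of those transformations — a horizontal flip — applied by hand (ImageDataGenerator applies flips, rotations, shifts, shears, and zooms on the fly during training):

```python
import numpy as np

# A toy 2 x 3 single-channel "image"; our real inputs are 96 x 96 x 3
image = np.array([[[1], [2], [3]],
                  [[4], [5], [6]]])

flipped = np.fliplr(image)  # mirror the image left-to-right
print(flipped[0].ravel())   # [3 2 1]
```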
We initialize aug, our ImageDataGenerator, on Lines 79-81.
From there, let’s compile the model and kick off the training:
# initialize the model
print("[INFO] compiling model...")
model = SmallerVGGNet.build(width=IMAGE_DIMS[1], height=IMAGE_DIMS[0],
    depth=IMAGE_DIMS[2], classes=len(lb.classes_))
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
model.compile(loss="categorical_crossentropy", optimizer=opt,
    metrics=["accuracy"])

# train the network
print("[INFO] training network...")
H = model.fit(
    x=aug.flow(trainX, trainY, batch_size=BS),
    validation_data=(testX, testY),
    steps_per_epoch=len(trainX) // BS,
    epochs=EPOCHS, verbose=1)
On Lines 85 and 86, we initialize our Keras CNN model with 96 x 96 x 3 input spatial dimensions. I'll state this again as I receive this question often — SmallerVGGNet was designed to accept 96 x 96 x 3 input images. If you want to use different spatial dimensions you may need to either:
- Reduce the depth of the network for smaller images
- Increase the depth of the network for larger images
Do not go blindly editing the code. Consider the implications larger or smaller images will have first!
We're going to use the Adam optimizer with learning rate decay (Line 87) and then compile our model with categorical cross-entropy since we have > 2 classes (Lines 88 and 89).
Note: For only two classes you should use binary cross-entropy as the loss.
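The loss itself is easy to verify by hand: for a one-hot label, categorical cross-entropy reduces to the negative log of the probability assigned to the true class. A quick NumPy check with made-up numbers:

```python
import numpy as np

# One-hot ground truth (class index 1 is the true class) and a
# hypothetical softmax output from the network
y_true = np.array([0.0, 1.0, 0.0, 0.0, 0.0])
y_pred = np.array([0.05, 0.80, 0.05, 0.05, 0.05])

loss = -np.sum(y_true * np.log(y_pred))  # equals -log(0.80)
print(round(loss, 4))  # 0.2231
```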
From there, we make a call to the Keras fit method to train the network (Lines 93-97). Be patient — this can take some time depending on whether you are training using a CPU or a GPU.
Once our Keras CNN has finished training, we’ll want to save both the (1) model and (2) label binarizer as we’ll need to load them from disk when we test the network on images outside of our training/testing set:
# save the model to disk
print("[INFO] serializing network...")
model.save(args["model"], save_format="h5")

# save the label binarizer to disk
print("[INFO] serializing label binarizer...")
f = open(args["labelbin"], "wb")
f.write(pickle.dumps(lb))
f.close()
We serialize the model (Line 101) and the label binarizer (Lines 105-107) so we can easily use them later in our classify.py script.
The label binarizer file contains the class index to human-readable class label dictionary. This object ensures we don’t have to hardcode our class labels in scripts that wish to use our Keras CNN.
Finally, we can plot our training and loss accuracy:
# plot the training loss and accuracy
plt.style.use("ggplot")
plt.figure()
N = EPOCHS
plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), H.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, N), H.history["val_accuracy"], label="val_acc")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="upper left")
plt.savefig(args["plot"])
I elected to save my plot to disk (Line 121) rather than displaying it for two reasons: (1) I’m on a headless server in the cloud and (2) I wanted to make sure I don’t forget to save the plot.
Training our CNN with Keras
Now we’re ready to train our Pokedex CNN.
Be sure to visit the “Downloads” section of this blog post to download code + data.
Then execute the following command to train the model, making sure to provide the command line arguments properly:
$ python train.py --dataset dataset --model pokedex.model --labelbin lb.pickle
Using TensorFlow backend.
[INFO] loading images...
[INFO] data matrix: 252.07MB
[INFO] compiling model...
[INFO] training network...
Train for 29 steps, validate on 234 samples
Epoch 1/100
29/29 [==============================] - 7s 237ms/step - loss: 1.4218 - accuracy: 0.6271 - val_loss: 1.9534 - val_accuracy: 0.2436
Epoch 2/100
29/29 [==============================] - 6s 208ms/step - loss: 0.7470 - accuracy: 0.7703 - val_loss: 2.7184 - val_accuracy: 0.3632
Epoch 3/100
29/29 [==============================] - 6s 207ms/step - loss: 0.5928 - accuracy: 0.8080 - val_loss: 2.8207 - val_accuracy: 0.2436
...
29/29 [==============================] - 6s 208ms/step - loss: 0.2108 - accuracy: 0.9423 - val_loss: 1.7813 - val_accuracy: 0.8248
Epoch 98/100
29/29 [==============================] - 6s 208ms/step - loss: 0.1170 - accuracy: 0.9645 - val_loss: 2.2405 - val_accuracy: 0.7265
Epoch 99/100
29/29 [==============================] - 6s 208ms/step - loss: 0.0961 - accuracy: 0.9689 - val_loss: 1.2761 - val_accuracy: 0.8333
Epoch 100/100
29/29 [==============================] - 6s 207ms/step - loss: 0.0449 - accuracy: 0.9834 - val_loss: 1.1710 - val_accuracy: 0.8291
[INFO] serializing network...
[INFO] serializing label binarizer...
Looking at the output of our training script we see that our Keras CNN obtained:
- 98.34% classification accuracy on the training set
- And 82.91% accuracy on the testing set
The training loss/accuracy plot follows:
As you can see in Figure 3, I trained the model for 100 epochs and achieved low loss with limited overfitting. With additional training data we could obtain higher accuracy as well.
Creating our CNN and Keras testing script
Now that our CNN is trained, we need to implement a script to classify images that are not part of our training or validation/testing set. Open up a new file, name it classify.py, and insert the following code:
# import the necessary packages
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.models import load_model
import numpy as np
import argparse
import imutils
import pickle
import cv2
import os
First we import the necessary packages (Lines 2-9).
From there, let’s parse command line arguments:
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-m", "--model", required=True,
    help="path to trained model")
ap.add_argument("-l", "--labelbin", required=True,
    help="path to label binarizer")
ap.add_argument("-i", "--image", required=True,
    help="path to input image")
args = vars(ap.parse_args())
We have three required command line arguments we need to parse:
- --model: The path to the model that we just trained.
- --labelbin: The path to the label binarizer file.
- --image: Our input image file path.
Each of these arguments is established and parsed on Lines 12-19. Remember, you don’t need to modify these lines — I’ll show you how to run the program in the next section using the command line arguments provided at runtime.
Next, we’ll load and preprocess the image:
# load the image
image = cv2.imread(args["image"])
output = image.copy()

# pre-process the image for classification
image = cv2.resize(image, (96, 96))
image = image.astype("float") / 255.0
image = img_to_array(image)
image = np.expand_dims(image, axis=0)
Here we load the input image (Line 22) and make a copy called output for display purposes (Line 23).
Then we preprocess the image in the exact same manner that we did for training (Lines 26-29).
From there, let’s load the model + label binarizer and then classify the image:
# load the trained convolutional neural network and the label
# binarizer
print("[INFO] loading network...")
model = load_model(args["model"])
lb = pickle.loads(open(args["labelbin"], "rb").read())

# classify the input image
print("[INFO] classifying image...")
proba = model.predict(image)[0]
idx = np.argmax(proba)
label = lb.classes_[idx]
In order to classify the image, we need the model and label binarizer in memory. We load both on Lines 34 and 35.
Subsequently, we classify the image and create the label (Lines 39-41).
The remaining code block is for display purposes:
# we'll mark our prediction as "correct" if the input image filename
# contains the predicted label text (obviously this makes the
# assumption that you have named your testing image files this way)
filename = args["image"][args["image"].rfind(os.path.sep) + 1:]
correct = "correct" if filename.rfind(label) != -1 else "incorrect"

# build the label and draw the label on the image
label = "{}: {:.2f}% ({})".format(label, proba[idx] * 100, correct)
output = imutils.resize(output, width=400)
cv2.putText(output, label, (10, 25), cv2.FONT_HERSHEY_SIMPLEX,
    0.7, (0, 255, 0), 2)

# show the output image
print("[INFO] {}".format(label))
cv2.imshow("Output", output)
cv2.waitKey(0)
On Lines 46 and 47, we're extracting the name of the Pokemon from the filename and comparing it to the label. The correct variable will be either "correct" or "incorrect" based on this. Obviously these two lines make the assumption that your input image has a filename that contains the true label.
From there we take the following steps:
- Append the probability percentage and "correct"/"incorrect" text to the class label (Line 50).
- Resize the output image so it fits our screen (Line 51).
- Draw the label text on the output image (Lines 52 and 53).
- Display the output image and wait for a keypress to exit (Lines 57 and 58).
Classifying images with our CNN and Keras
We're now ready to run the classify.py script!
Ensure that you’ve grabbed the code + images from the “Downloads” section at the bottom of this post.
Once you’ve downloaded and unzipped the archive change into the root directory of this project and follow along starting with an image of Charmander. Notice that we’ve provided three command line arguments in order to run the script:
$ python classify.py --model pokedex.model --labelbin lb.pickle \
    --image examples/charmander_counter.png
Using TensorFlow backend.
[INFO] loading network...
[INFO] classifying image...
[INFO] charmander: 85.42% (correct)
And now let’s query our model with the loyal and fierce Bulbasaur stuffed Pokemon:
$ python classify.py --model pokedex.model --labelbin lb.pickle \
    --image examples/bulbasaur_plush.png
Using TensorFlow backend.
[INFO] loading network...
[INFO] classifying image...
[INFO] bulbasaur: 99.61% (correct)
Let’s try a toy action figure of Mewtwo (a genetically engineered Pokemon):
$ python classify.py --model pokedex.model --labelbin lb.pickle \
    --image examples/mewtwo_toy.png
Using TensorFlow backend.
[INFO] loading network...
[INFO] classifying image...
[INFO] mewtwo: 81.52% (correct)
What would an example Pokedex be if it couldn’t recognize the infamous Pikachu:
$ python classify.py --model pokedex.model --labelbin lb.pickle \
    --image examples/pikachu_toy.png
Using TensorFlow backend.
[INFO] loading network...
[INFO] classifying image...
[INFO] pikachu: 100.00% (correct)
Let’s try the cute Squirtle Pokemon:
$ python classify.py --model pokedex.model --labelbin lb.pickle \
    --image examples/squirtle_plush.png
Using TensorFlow backend.
[INFO] loading network...
[INFO] classifying image...
[INFO] squirtle: 99.96% (correct)
And last but not least, let’s classify my fire-tailed Charmander again. This time he is being shy and is partially occluded by my monitor.
$ python classify.py --model pokedex.model --labelbin lb.pickle \
    --image examples/charmander_hidden.png
Using TensorFlow backend.
[INFO] loading network...
[INFO] classifying image...
[INFO] charmander: 98.78% (correct)
Each of these Pokemon was no match for my new Pokedex.
Currently, there are around 807 different species of Pokemon. Our classifier was trained on only five different Pokemon (for the sake of simplicity).
If you’re looking to train a classifier to recognize more Pokemon for a bigger Pokedex, you’ll need additional training images for each class. Ideally, your goal should be to have 500-1,000 images per class you wish to recognize.
To acquire training images, I suggest that you look no further than Microsoft Bing’s Image Search API. This API is hands down easier to use than the previous hack of Google Image Search that I shared (but that would work too).
Limitations of this model
One of the primary limitations of this model is the small amount of training data. I tested on various images and at times the classifications were incorrect. When this happened, I examined the input image + network more closely and found that the color(s) most dominant in the image influence the classification dramatically.
For example, lots of red and oranges in an image will likely return “Charmander” as the label. Similarly, lots of yellows in an image will normally result in a “Pikachu” label.
This is partially due to our input data. Pokemon are obviously fictitious, so there are no actual "real-world" images of them (other than the action figures and toy plushes).
Most of our images came from either fan illustrations or stills from the movie/TV show. And furthermore, we only had a limited amount of data for each class (~225-250 images).
Ideally, we should have at least 500-1,000 images per class when training a Convolutional Neural Network. Keep this in mind when working with your own data.
Can we use this Keras deep learning model as a REST API?
If you would like to run this model (or any other deep learning model) as a REST API, I wrote three blog posts to help you get started:
- Building a simple Keras + deep learning REST API (Keras.io guest post)
- A scalable Keras + deep learning REST API
- Deep learning in production with Keras, Redis, Flask, and Apache
Summary
In today’s blog post you learned how to train a Convolutional Neural Network (CNN) using the Keras deep learning library.
Our dataset was gathered using the procedure discussed in last week’s blog post.
In particular, our dataset consists of 1,191 images of five separate Pokemon (animal-like creatures that exist in the world of Pokemon, the popular TV show, video game, and trading card series).
Using our Convolutional Neural Network and Keras, we were able to obtain 82% accuracy, which is quite respectable given (1) the limited size of our dataset and (2) the number of parameters in our network.
In next week’s blog post I’ll be demonstrating how we can:
- Take our trained Keras + Convolutional Neural Network model…
- …and deploy it to a smartphone with only a few lines of code!
It’s going to be a great post, don’t miss it!
To download the source code to this post (and be notified when next week’s can’t miss post goes live), just enter your email address in the form below!
324 responses to: Keras and Convolutional Neural Networks (CNNs)
Brilliant post as usual. Thanks for sharing your knowledge.
Thanks Anirban!
thanks.
Hello Adrian, distinct as usual.
You have touched on something very important that stops too many people: how to train a neural network of one's own, and how to use CNN ResNet models.
Thank you very much for your efforts in pushing people seriously forward.
I have a question about something that is holding me up, and excuse me for this.
I was asking how to implement incremental training for my model.
For example, I have an image database of about 100 objects, and each object has 10,000 pictures.
A model was built for this data.
When I collect more pictures I want to add them to my model.
Do I have to add the pictures to the photo collection and then train again on all the old and new photos?
As everyone knows, this takes too much time.
I have learned about incremental training but I do not know how to use it in practice, using any framework (Caffe, Keras, etc.).
I hope you can point me toward a solution.
Thank you Adrian
Hi Mohamed — you could technically train from scratch but this would likely be a waste of resources each and every time you add new images. I would suggest a hybrid approach where you:
1. Apply fine-tuning to the network, perhaps on a weekly or monthly basis
2. Only re-train from scratch once every 3-6 months
The timeframes should be changed based on how often new images are added of course so you would need to change them to whatever is appropriate for your project. I also cover how to fine-tune a network inside Deep Learning for Computer Vision with Python.
Thank you very much Adrian for your response
I really benefited a lot from you
Always forward
Thank you
If I want to split my dataset into train, test, and validation sets, what is a good method to do that, rather than only splitting the dataset into train and test?
Thank you very much
You would use scikit-learn’s train_test_split function twice. The first time you split the data into two splits: training and testing.
You then split a second time on the training data, creating another two splits: training and validation.
This process will leave you with three splits: training, testing, and validation.
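A minimal sketch of that double split (the 80/20 and 75/25 ratios here are illustrative, not prescribed by the post):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# 100 dummy samples with 10 features each, plus integer labels
X = np.arange(1000).reshape(100, 10)
y = np.arange(100) % 5

# first split: carve off the final test set (20% here)
trainX, testX, trainY, testY = train_test_split(
	X, y, test_size=0.20, random_state=42)

# second split: carve a validation set out of the remaining training data
trainX, valX, trainY, valY = train_test_split(
	trainX, trainY, test_size=0.25, random_state=42)

# leaves a 60/20/20 split overall
print(len(trainX), len(valX), len(testX))
```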
Thank you, that is really helpful.
now i want to try top-5 accuracy, do you know how to do that?
I discuss rank-5 accuracy, including how to compute it, inside Deep Learning for Computer Vision with Python.
The gist is that you need to:
1. Loop over each of your test data points
2. Predict the class labels for it
3. Sort labels by their probability in descending order
4. Check to see if ground-truth label exists in the top 5 predicted labels
Refer to Deep Learning for Computer Vision with Python for more details, including implementation.
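The four steps above can be sketched with NumPy (the probabilities below are made up for illustration):

```python
import numpy as np

def rank5_accuracy(preds, labels):
	# preds: (N, n_classes) probabilities; labels: (N,) ground-truth indices
	hits = 0
	for p, gt in zip(preds, labels):
		# sort class indices by probability, descending, keep the top 5
		top5 = np.argsort(p)[::-1][:5]
		# check whether the ground-truth label is among them
		if gt in top5:
			hits += 1
	return hits / float(len(labels))

# toy example: 2 images, 6 classes
preds = np.array([[0.05, 0.50, 0.20, 0.10, 0.08, 0.07],
                  [0.30, 0.25, 0.20, 0.15, 0.06, 0.04]])
print(rank5_accuracy(preds, np.array([1, 5])))  # 0.5: label 5 falls outside the top 5
```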
Nice post Adrian!!! While running, I got this error: “error: the following arguments are required: -d/--dataset, -m/--model, -l/--labelbin”. Please help me with this.
You need to supply the command line arguments to the Python script. Make sure you read this tutorial to help get you started.
Update:
I tried to do the same on 5 actresses. I got 44% accuracy on the validation and above 80% on the main group.
I have ~280 pictures for each actress.
How to increase the accuracy?
1. increase the number of pictures
2. try to find the face and work on it as ROI
Do you have other ideas? maybe play with the training parameters (alpha)?
When performing face recognition you need to:
1. Detect the face and extract the face ROI
2. Classify the face
Training a network to recognize faces on an entire image is not going to work well at all.
This dataset looks smaller than MNIST! I think you should rather teach us how to work with real-world data, where there are a lot of classes and the data is much more imbalanced.
I discuss how to gather your own training data in a previous post. The post you are commenting on is meant to be an introduction to Keras and CNNs. If you want an advanced treatment of the material with real-world data I would kindly refer you to my book, Deep Learning for Computer Vision with Python, where I have over 900+ pages worth of content on training deep neural networks on real-world data.
As always a really great post!
I was wondering if it’s possible to classify several objects in a picture (an image with several pokemons in it?) kinda like in one of your other great posts, using the models I train using Keras?
Thank you so much for an awesome post
Hey Jesper — I’ll be writing a blog post on how and when you can use a CNN trained for image classification for object detection. The answer is too long to include in a comment as there is a lot to explain including when/where it’s possible. The post will be publishing on/around May 14th so keep an eye out for it.
You are the superman of so many things – thanks also for the distinction between image classification and object detection. These blogs are so good!
Thanks again
Thank you Jesper, I really appreciate that 🙂
Hi Adrian, thank you for the great explanation in detail. During my computer vision course we were given 2 projects and I have used a lot of algorithms from your website. In the last project it is not required to use Deep-learning but I went for it anyways as a bonus, and i’m using your pokedex code.
Thanks!
Nice! Best of luck with the project Sean. I hope it goes well.
Good job as usual Adrian. I learned so much from this blog series!
Thank you, Michael! Believe it or not, the series only gets better from here 🙂
Hi, I loved this post and found it really useful as a beginner learning about CNN’s.
Although I was getting a “memory error” at this step:
data = np.array(data, dtype="float") / 255.0
Actually, I added around 5k images to “data” and have around 13 classes… but clearly it is not working in this case… could you suggest anything to tackle this issue…
Your system does not have enough memory to store all images in RAM. You can either:
1. Update the code to use a data generator and augmentor that loads images from disk in small batches
2. Build a serialized dataset, such as HDF5 format, and loop over the images in batches
If you’re working with an image dataset too large to fit into main memory I would suggest reading through Deep Learning for Computer Vision with Python where I discuss my best practices and techniques to efficiently train your networks (code is included, of course).
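A minimal sketch of option 1, a generator that only ever holds one batch in RAM (load_image below is a hypothetical stand-in for the cv2.imread + resize + scaling code in the post):

```python
import numpy as np

def load_image(path):
	# hypothetical placeholder: in the real script this would read the file
	# from disk with cv2.imread, resize it, and scale the pixels
	return np.zeros((96, 96, 3), dtype="float32")

def batch_generator(image_paths, labels, batch_size=32):
	# loop forever, as Keras' fit_generator expects
	while True:
		for i in range(0, len(image_paths), batch_size):
			batch_paths = image_paths[i:i + batch_size]
			batch_images = np.array([load_image(p) for p in batch_paths])
			batch_labels = np.array(labels[i:i + batch_size])
			yield batch_images, batch_labels

# only one batch of images is in memory at any time
gen = batch_generator(["img%d.png" % i for i in range(100)], list(range(100)), 32)
images, labels = next(gen)
print(images.shape)  # (32, 96, 96, 3)
```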
hi adrian. how can I use this network to select the object in the image, such as the face.
Hi Alex — what do you mean by “select”? Can you clarify? Perhaps you are referring to object detection or face detection?
how do I use my trained model for object detection
Object detection
You cannot use this exact model for object detection. Deep learning object detectors fall into various frameworks such as Faster R-CNN, Single Shot Detectors (SSDs), YOLO, and others. I cover them in detail inside Deep Learning for Computer Vision with Python where I also demonstrate how to train your own custom deep learning object detectors. Be sure to take a look.
I’ll also have a blog post coming out in early May that will help discuss the differences between object detection and image classification. This has become a common question on the PyImageSearch blog.
Finally, if you are specifically interested in face detection, refer to this blog post.
Hi Adrian,
did you try to use CNN for iris recognition?
Thanks for great post.
Hi Bostjan — the iris of the eye? I have not used CNNs for iris recognition.
Hi Adrian
I got this error before starting training
Using TensorFlow backend.
[INFO] loading images…
libpng warning: Incorrect bKGD chunk length
[INFO] data matrix: 252.07MB
[INFO] compiling model.
can you clarify this for me?
moreover, for the val_loss, after about 10 epochs it hit high loss number and get back to normal
thanks
This is not an error, it’s just a warning from the libpng library when it tried to load a specific image from disk. It can be safely ignored.
Thanks A lot Adrian for sharing the informative knowledge <<
by the way, can i use this model for one classification only?
I’m not sure what you mean by “one classification only” — could you clarify?
for example, i want to detect only cats , so inside dataset folder i will have only cats folder
To train a model you need at least two classes. If you want to detect only cats you should create a separate “background” or “ignore” class that consists of random (typically “natural scene”) images that do not contain cats. You can then train your model to predict “cat” or “background”.
Hi Adrian,
I would like to know how to set class weights for imbalanced classes in Keras.
I remember I read it in DL4CV but I can’t find it.
Can you point me to the chapter?
Thx,
G
Hi Gilad — the chapter you are referring to is the “Smile Detection” chapter of the Starter Bundle.
Very neat article, though I think there is still something to be said about Pokemon (and children’s media in general) being pre-engineered to be easily identifiable.
Musing about a real-life equivalent, many esteemed researchers argue over which animals belong in which categories.
It would be interesting to see a neural net which classifies animals among, say, the order of ungulates.
Really cool and great work! About to start on some hobby work involving Keras and OpenCV installed in Blender environment.
Wish me luck!
Hi Adrian,
Thanks for your great post. I want to detect more than one object and draw rectangle around them. How can i modify code?
Classification models cannot be directly used for object detection. You would need a deep learning object detection framework such as Faster R-CNN, SSD, or YOLO. I cover them inside Deep Learning for Computer Vision with Python.
Amazing post. Really helpful for my project. Eagerly awaiting your next post.
Hi Akshay — you can find the Keras + iOS + CoreML post here
Hey can you also make a tutorial for object detection using keras..
I cover deep learning object detection inside Deep Learning for Computer Vision with Python.
Sir,
I have your 3 books. Could you please tell me where is the chapter that covers deep learning object detection.
The “ImageNet Bundle” and “Bonus Bundle” both cover deep learning object detection.
Adrian a great post, something I have been looking forward to. How would you save the Keras Model in a h5 format.?
If you call the save method of a model it will write it to disk in a serialized HDF5 format.
# scale the raw pixel intensities to the range [0, 1]
data = np.array(data, dtype="float") / 255.0
labels = np.array(labels)
When I scale my own dataset at size 224 x 224 I get a memory error, but the error does not occur if I use size 128 x 128.
How do I solve that error? I need to use the dataset at size 224 x 224.
thank you very much,
Your system is running out of RAM. Your entire dataset cannot fit into RAM. You can either (1) install more RAM on your system or (2) use a combination of lazy loading data generators from disk or use a serialized dataset, such an HDF5 file. I demonstrate how to do both inside Deep Learning for Computer Vision with Python.
Ran this on your deep-learning-for-computer-vision AMI on AWS using a c4.2xlarge (the c4.xlarge instance type gave ALLOC errors, out of memory?) instance type and got the following
[INFO] serializing label binarizer…
Exception ignored in: <bound method BaseSession.__del__ of >
Traceback (most recent call last):
File “/home/ubuntu/.virtualenvs/dl4cv/lib/python3.5/site-packages/tensorflow/python/client/session.py”, line 701, in __del__
TypeError: ‘NoneType’ object is not callable
This is a problem with the TensorFlow engine shutting down properly. It will only happen sporadically and since it only happens during termination of the script it can be safely ignored.
Hi Adrian,
Thanks a lot for such a wonderful post. I am doing my project somewhat similar to this. But in my dataset, I have only two Labels.
One is background and in another different person with the background. I want to detect the presence of these people i.e i want to classify images into presence or absence (based on the presence of a person). But images in my dataset are of size 1092 X 1048 pixels. I have resized them to 512 X 512 using cv2.resize() function.
My question is can I use this same model for the training. If not, how can I decide the model suitable for this case? I believe I have to use a deeper network because the size of images used is much large.
Thanks.
Instead of training your model from scratch is there a reason you wouldn’t use existing deep learning networks that are trained to perform person detection? Secondly, if you apply face detection using Haar cascades or HOG + Linear SVM you may be able to skip using deep learning entirely.
Depending on your input images, in particular how large, in pixels, the person is in the image, you may need to play around with larger input image dimensions — it’s hard to say which one will work best without seeing your data.
Great post! I went through this exercise with 250 images of water bottles, 250 of tennis balls, and 60 of dog poop. Yes dog poop. There’s a story in there for later. Anyway, it classifies anything that looks like any of the three classes as dog poop and one image of a tree as a tennis ball with 50% confidence. Most of the images are fairly well cropped. The failures on water bottles and tennis balls really surprise me. Is it likely that I just don’t have enough samples of the dog poop class?
You may not have enough examples of the dog poop class but you may also want to compute the class weights to handle the imbalance.
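One common recipe for those weights is inverse class frequency (a general heuristic, not code from the post), which can then be handed to the class_weight parameter of Keras’ fit:

```python
import numpy as np

def inverse_frequency_weights(labels):
	# weight each class by max_count / count so the rarest class
	# contributes the most to the loss
	classes, counts = np.unique(labels, return_counts=True)
	weights = counts.max() / counts.astype("float64")
	return dict(zip(classes.tolist(), weights.tolist()))

# toy imbalance: 250 water bottles (0), 250 tennis balls (1), 60 dog poop (2)
labels = np.array([0] * 250 + [1] * 250 + [2] * 60)
print(inverse_frequency_weights(labels))
```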
Ran this code on AWS running a c4.2xlarge instance. No problems. Messed up first time using the wrong AMI image, Version 1.2 is required. I am running this again now using bee images obtained using the bing image search as outlined by you Adrian, about 11000+ images with 35 classes. I suspect I may need to run this on a GPU instance, only time will tell.
Congrats on getting up and running with your dataset and network! For 11,000 images I would likely suggest a GPU instance, but that really depends on which model architecture you are using.
You are quite right. Do not have the time or budget to use CPU only. Even using just a single GPU gives a ten times reduction in the time to produce the model, that is using a p2.xlarge.
So now I am going to look at the Microsoft offering and see how it fares.
That is bees as in honey bees.
Adrian,
thanks for your great work. These posts are extremely helpful.
That said, I do have a question and wonder if you can help. I’m running a paperspace P5000 instance w/ 16GB GPU memory and 30 GB general memory. When I was running your example w/ TensorFlow GPU support I got a memory warning/error.
…
W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at constant_op.cc:207 : Resource exhausted: OOM when allocating tensor with shape[32,128,8,8] and type float on /job:localhost/replica:0/task:0/device:GPU:0
…
Is there any way to set this up, so it does not run into any issues? One would think that 16GB are enough for this example?
Thanks in advance for your answer.
Dirk
Hey Dirk, I’m sorry to hear about the issues with the training process. 16GB of memory is way more than sufficient for this project. My guess is that you may be running some other job on your GPU at the same time and TensorFlow cannot allocate enough memory? Otherwise it may be a Paperspace issue. Perhaps try to launch a new instance and see if it’s the same result? Unfortunately I’m not sure what the exact error is, other than it’s likely an issue with the specific instance.
Hi Adrian,
Really excited to get something working from this amazing series. I’m hitting an error running my train.py – I get to the line:
[INFO] compiling model…
and get a traceback error: AttributeError ‘NoneType’ object has no attribute ‘compile’
I followed along and created all the scripts while going through you’re posts. I don’t currently have a .model file in my project structure, but figured it would be generated at this point of execution. What am I missing?
Thanks!
It looks like your “model” object was never defined. I do not recommend copying and pasting along with the tutorial. It’s too easy to miss code snippets or put them in the wrong place. Make sure you use the “Downloads” section of this tutorial to download my code. From there you can compare it to your own and determine which snippet you missed.
Is it possible to convert the saved model to a format that can be used by the Movidius Neural Compute Stick (NCS). From the NCS documentation it seems that it will accept Caffe or TensorFlow format models.
I know “read the docs” but I am wondering of anybody knows off the top of their heads or have even attempted to use the NCS in this context?
I am looking to use this in conjunction with a Raspberry Pi. Not the same kudos as the Apple but a-lot cheaper overall.
Dummkopf. I just spotted your article “Getting started with the Intel Movidius Neural Compute Stick”
Keras models are not directly supported by the Intel NCS SDK and their team but from what I understand it is on their roadmap. There is an open source tool that claims to port Keras models to TensorFlow graphs to NCS graphs but I have not tried it and cannot speak to it (other than it exists).
Hi Adrian,
Thanks for this great tutorial!
I have a question.
After training model with all pokemons. Can I remove a specific pokemon (for example Charmander) such that it can’t be recognized anymore?
How can I do that?
Thanks, I’m glad you enjoyed it! 🙂
You would need to apply transfer learning, in particular fine-tuning to remove or add classes from a trained network. I cover transfer learning and fine-tuning inside Deep Learning for Computer Vision with Python.
Thanks Adrian! You Rock 🙂
Hey, can you also make a tutorial to develop a object detection model using keras: SSD.
Hey Shashank — Deep Learning for Computer Vision with Python already covers object detection, SSDs, and Faster R-CNNs. Give it a look!
Hey Adrian,
First of all thank you for this great tutorial. It helped me a lot! Now I’m trying to deploy Keras model on Heroku with Flask but I couldn’t handle it. Can you make a tutorial about it?
I am on Windows and run this command:
python classify.py --model pokedex.model --labelbin lb.pickle \
--image examples/charmander_counter.png
but am getting this error. Can anybody help me?
usage: classify.py [-h] -m MODEL -l LABELBIN -i IMAGE
classify.py: error: the following arguments are required: -i/--image
It looks like you’re using the command line arguments correctly but it is not finding the image argument. Perhaps in Windows you need to enter all the arguments on one line without the backslash.
Hi Adrian,
I’m a bit confused as to what “% (incorrect) or % (correct)” is telling us.
Say for example we were to try to classify an image of a dog after we train our model, and it outputs “mewtwo: 90% (incorrect)”, what is this telling us? Does this mean that it is 90% sure that it is not a mewtwo?? If that’s the case, how did it come up with the “mewtwo” part being that the input image is titled “dog_test”
I hope the question makes sense
thanks for all your hard work in making these tutorials they are incredibly helpful
thanks
The “correct” and “incorrect” text is determined via the filename. It’s only used for visual validation and to show us that our network correctly predicted an object. It will check the filename for the class label and then compare that to the prediction. If it matches then the prediction is “correct”. If it does not match, the prediction is “incorrect”.
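The check itself is just a substring search on the input path, along these lines:

```python
import os

def mark_prediction(image_path, predicted_label):
	# grab the filename from the path, then see whether the predicted
	# class label appears anywhere in it
	filename = image_path[image_path.rfind(os.path.sep) + 1:]
	return "correct" if filename.rfind(predicted_label) != -1 else "incorrect"

print(mark_prediction("examples/charmander_hidden.png", "charmander"))  # correct
print(mark_prediction("examples/dog_test.png", "mewtwo"))               # incorrect
```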
How do I decode the predictions so that the output shows all the classes we have, not only one class?
Thank you
Hey Akbar — are you referring to showing the probabilities + human readable class labels for each possible label?
yes, so i can implement that with flask to create rest api
An easy way to do this would be to use the LabelEncoder object’s “.transform” method.
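A sketch of pairing every probability with its human-readable label, assuming you kept the fitted binarizer around (a fitted scikit-learn LabelBinarizer exposes the label names, in probability-column order, via its classes_ attribute; the numbers below are made up):

```python
import numpy as np

# stand-ins: in the real script these come from lb.classes_ and model.predict
class_names = ["bulbasaur", "charmander", "mewtwo", "pikachu", "squirtle"]
proba = np.array([0.01, 0.92, 0.03, 0.03, 0.01])  # one image's output row

# pair labels with probabilities, highest first; handy for a JSON response
results = sorted(zip(class_names, proba.tolist()), key=lambda r: r[1], reverse=True)
for label, p in results:
	print("{}: {:.2f}%".format(label, p * 100))
```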
Thanks Adrian for the wonderful post. I have question. If I want to run the model for image size 28x28x4 (28 pixels, 4 bands R,G,B,NIR) where should I modify in the script?
Thanks again
Yes, you will need to modify the network to accept an extra channel provided you would like to pass it through the network.
Hi Adrian, Thanks for your reply. Can you briefly tell me how do I do it? Can you point me to some resources so I can learn how to do it
Unfortunately I do not have any tutorials on the topic and none come to mind off the top of my head. If I come across any I’ll come back and update this comment.
I get an error AttributeError: ‘LabelBinarizer’ object has no attribute ‘classes_’
Can you help me ?
Hey Lisa — what version of scikit-learn are you using?
I am using scikit-learn version 0.19.1
I created this project using scikit-learn 0.19.0 so I doubt that’s the issue. Perhaps try re-installing scikit-learn and see if that resolves the issue.
Hello Adrian, thanks a lot for your contribution. I have tried this and got an error like this:
“ValueError: y has 0 samples: array([], dtype=float64)”. Please help me with this.
What line of code is throwing that error?
Hey Adrian, I have one question. Suppose i have large collection of images say 5000 in each category and i do not want to use data augmentation just to reduce the burden on my CPU. i.e. i want to skip these lines:
aug = ImageDataGenerator(rotation_range=25, width_shift_range=0.1,
	height_shift_range=0.1, shear_range=0.2, zoom_range=0.2,
	horizontal_flip=True, fill_mode="nearest")
How i can do that and how i need to modify model.fit_generator()
H = model.fit_generator(
aug.flow(trainX, trainY, batch_size=BS),
validation_data=(testX, testY),
steps_per_epoch=len(trainX) // BS,
epochs=EPOCHS, verbose=1)
Please help.
Is there a particular reason you want to skip data augmentation? Typically you would use it in nearly all situations. If you do not want to use the data augmentation object you can just call model.fit.
I have already created multiple images from sample images using contrast, brightness adjustment and adding random noise. After combining these different sets i have final collection of data-sets in which every class has around 5000 images. All these pre-processing is done using openCV and python. Also i am working on CPU, so i wanted to reduce the complexity.
I do not want to perform data augmentation like horizontal flip, crop and others because it may eliminate the required region of interest.
Got it, that makes sense. If you have already created your image dataset manually and created the data augmentation manually then you would just call the “.fit” method of the model. That said, I would still recommend creating a custom Python class to perform your required data augmentation on the fly.
Hi, is there another way to write this? I’m not using command line arguments:
filename = args["image"][args["image"].rfind(os.path.sep) + 1:]
correct = "correct" if filename.rfind(label) != -1 else "incorrect"
You would simply remove those lines. They would not be needed if (1) you are not using command line arguments and (2) your input image paths would not contain the label for the image (which the code would use to validate that the prediction is indeed correct).
Hello Adrian,
Great post as always. I am trying to use the code for binary classification (say cat vs dog).
Gathered around ~200 samples each using Bing API.
1. changed loss function to binary_crossentropy
2. changed the final Dense layer to have one class. (Is this right ?)
I am stuck at ~55% accuracy even after 100 epochs. Both training and test accuracy are low.
What am I missing here ? What needs to be changed ? Really appreciate your help.
Thanks.
No, the final dense layer needs to have as many nodes as there are class labels. If you have two classes you need two nodes in that final dense layer.
I did not get the concept behind it. why you have given same input_shape, each time you are using model.add function.
model.add(Conv2D(64, (3, 3), padding="same", input_shape=inputShape))
After every convolutional layer, the input shape should change. Am I wrong? Please clear my doubts.
Thanks.
Are you asking why I explicitly use the padding=”same” parameter? If so, I only want to reduce the volume size via the pooling operations not via convolution.
No, i was asking about parameter “input_shape=inputShape”. Because after every convolutional layer, the input shape should change but here initial input shape of image is provided to every layer.
I am really confused with the parameter input_shape.
The CONV layer is the first layer of the network. We define the input shape based on the parameters passed to the “build” method. For this example, assuming TensorFlow ordering, the input shape will be (96, 96, 3) since our input images are 96×96 with a depth of 3. Based on our CONV and POOL layers the volume size will change as it flows through the network.
For more information, examples, and code on learning the fundamentals of CNNs + Keras I would recommend taking a look at Deep Learning for Computer Vision with Python where I discuss the topic in detail.
Please correct me if I am wrong.
I think what Shubham is asking is, why are we giving inputShape each time we add Conv2D to our model. Is it not enough to give to the first layer alone ?
Rest of the layers, it should be automatically calculated from the previous layer’s dimensions right ?
In this case, even if we pass inputShape to Conv2D in other than first layer keras will ignore it I guess. Even if we remove inputShape parameter in the later layers it should run fine. (it ran fine for me)
Thanks Arun! I understand the question now.
Yes, the input shape does not have to be explicitly passed into the Conv2D layer after the first one. It does for the first, but not for all others. I accidentally left it in when I was copying and pasting the blocks of layers. I’ll get the post updated to avoid any confusion. Thanks Arun and Shubham!
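For intuition, with padding="same" the spatial size only changes at the POOL layers; a tiny calculator sketch (the layer list below is illustrative, not the exact network definition from the post):

```python
def volume_sizes(input_hw, layers):
	# with padding="same", CONV preserves height/width;
	# a 2x2 max-pool with stride 2 halves them
	sizes = [input_hw]
	h = input_hw
	for layer in layers:
		if layer == "pool":
			h = h // 2
		sizes.append(h)  # "conv" leaves h unchanged
	return sizes

# a VGG-style stack of CONV and POOL layers on a 96x96 input
print(volume_sizes(96, ["conv", "pool", "conv", "conv", "pool",
                        "conv", "conv", "pool"]))
```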
Thanks again for this. I just got back to this today and am running into an issue: It gets all the way to [INFO] training network… and then errors out with (ultimately) this at the end of the traceback:
while using as loss ‘categorical_crossentropy’, which expects targets to be binary matrices (1s and 0s) of shape (samples, classes). If your targets are integer classes, you can convert them to the expected format via:
from keras.utils import to_categorical
y_binary = to_categorical(y_int)
Any ideas what I must have screwed up to be able to get that far, but no further?
Thanks for any help,
Dave
Hi dave,
i guess this error is all because of data. I tried using my own data set and received the same error.
Thanks for such an awesome post. Pokedex.model is an unknown area. Did you code anything there which is not provided. What exactly it is?
Hi Kin — could you clarify what you mean by “an unknown area”? I’m not sure what you are referring to.
Sorry Adrian, I didn’t frame my question correctly. I wanted to understand how to made pokedex.model. Is it pre-built for you prepared it. I am new to Deep learning and Computer Vision. Pardon me if it’s a stupid questions.
The “pokedex.model” file is created after you run the “train.py” file in this post. The “train.py” file trains a Keras CNN. This model is then serialized to disk as “pokedex.model”. If you’re new to deep learning I would suggest working through Deep Learning for Computer Vision with Python to help you get up to speed.
Hi Adrian, thanks for the response. I will definitely start referring that.
Hi Adrian, really thanks for the post, I tried it, and it works great, I added pidgeotto and works great to, but I dowloaded a dataset of food from here
the dataset is the same as the pokemon one, but there are a lot of classes, at the first time, I got the error in this line
image = image.astype("float") / 255.0
“Memory error”,
then I tried only with 20 classes and is running, but every Epoch have 500 steps, and every time I see the next message :
W tensorflow/core/framework/allocator.cc:101] Allocation of 33554432 exceeds 10% of system memory.
But it is working till now; it takes too much time. I don’t know, maybe my computer is not good enough to run with that dataset, or I have to change the dataset or split it into parts. I need help, thanks for your time.
This tutorial assumes that you can fit the entire image dataset into memory. The dataset is too large for you to fit into memory. Take a look at Keras’ “flow from directory” methods as a first start. You should also take a look at Deep Learning for Computer Vision with Python where I demonstrate how to work with datasets that are too large to fit into memory.
Hi Adrian! Great job! I have a question… I was testing the neural network for facial recognition and the result, I think, was good with the training set, but with my testing set it shows “incorrect” and displays the correct name of the face label. That result confused me. Can you explain why that happens? I’m new to this and I want to learn and understand more about it. Please help me understand why it recognizes the face but shows incorrect.
I should really remove the “correct/incorrect” code from the post as it seems to be doing more harm than good and just confusing readers. Keep in mind that our CNN has no idea if it’s classification is correct or not. We validate if the CNN is correct (or incorrect) in its prediction by letting it investigate the input file path. If the input file path matches the correctly predicted label, we mark it as correct. This requires that our input file paths contain the class label of the image. This is done only for visualization purposes.
Again, if it’s confusing you, ignore that part of the code. I’ll be ripping it out of the post next week as again it’s just causing too much confusion.
Hi Adrian, I face this problem when I try to compile the code as you mention:
“TypeError: softmax() got an unexpected keyword argument ‘axis'”
any idea how to solve this?
thanks for your help
Hey Lee — what version of Keras are you using? I haven’t encountered that particular error before.
Hi Adrian
Can the above code run on a laptop? For example, my laptop uses an i5 and 8GB RAM.
thanks in advance, very cool tutorials
Yes, the code in this tutorial can run on a laptop (you do not need a GPU). If you want to use a different dataset keep in mind that this method will store the entire image dataset in memory. For a large dataset you’ll run out of RAM so you would need to either (1) update the code to apply Keras’ flow through directory or (2) follow my method inside Deep Learning for Computer Vision with Python where I demonstrate how to serialize an image dataset to disk and then efficiently load batches from the dataset into memory for efficient training.
Hi Adrian,
How can i test on batch not on individual images? i want to test it on batch.
The model.predict method will naturally accept batches of images, and in fact, our code is already working for batch processing; we are just using a “batch of one” for this example. To build a batch with more than one image you would loop over all images, apply the pre-processing steps on Lines 26-29, building a NumPy array of images as you go. From there you can pass the entire batch through the network. If you’re interested in learning more about batch image classification be sure to refer to Deep Learning for Computer Vision with Python.
Hi adrian,
I was thinking: what if the test images also include images which don’t contain Pokemon toys? Then what output should it produce?
I would suggest training a separate class called “background” or “ignore”. Take a look at this blog post for more information.
Hi, if we have 1000 Pokemon images for each class, how do we know which number of epochs and batch size would be correct in order to have good accuracy?
The number of epochs and batches are called “hyperparameters”. We normally run many experiments to manually tune such hyperparameters. The batch size wouldn’t typically change (it’s normally the largest value that could fit in your GPU). The epochs may change but you would manually run experiments to determine this.
Hello
I am trying to implement my own dataset on this CNN model. Is it possible for the CNN to take multiple images at the same time and then classify? For example, if I give 20 images of just Charmander during the testing phase, would the network use all those 20 images and make a decision based on them as to what type of Pokemon it is?
Thank you
Yep! What you are referring to is called “batching”. CNNs naturally batch process images. You would build a NumPy array of your (preprocessed) images and then pass them to the .predict method of the model. The model will classify all 20 of your images and return the probabilities of each label. If you had 20 images and were predicting 100 classes, the returned array would be 20×100.
what if I give it 20 different images of the same pokemon and I want only one prediction?.
For my application I have a time series classification problem, i.e., I have data with multiple time steps (samples), and each time step has multiple images of the same class. I want the model to take one time step consisting of multiple images and predict based on that complete time step.
Also, I do not know if I can specify each sample during the training phase to improve the accuracy of the model.
There are a few ways to approach this but the most simple method would be to make predictions on all 20 images and then average the probabilities for each class together.
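A short sketch of that averaging approach, assuming batch is a preprocessed (N, H, W, C) array as described above (the function name is mine):

```python
import numpy as np

def predict_by_consensus(model, batch, class_names):
    # one probability vector per image, shape (N, num_classes)
    proba = model.predict(batch)
    # average over the N images to obtain a single consensus distribution
    avg = proba.mean(axis=0)
    j = int(avg.argmax())
    return class_names[j], float(avg[j])
```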
Hi Adrian,
In the “limitations” section you mention that “When this happened, I examined the input image + network more closely and found that the color(s) most dominant in the image influence the classification dramatically.”
How do you delve into the model to find out what “features” such as color has the most “weight”. I wouldn’t have thought that the model is human readable?
Regards,
Tom
You can visualize the activations for each layer. This article on the official Keras blog will help you get started.
Hello. I am trying to build a model with a different dataset (only two classes), and I’m stuck on this error and don’t know how to fix it. Could you lend me a hand with it?
This is an image of the error that I mentioned:
Thanks.
The scikit-learn implementation of LabelBinarizer will not work for only two classes. Instead, you should use the “np_utils.to_categorical” function included in Keras. Also make sure you swap out categorical cross-entropy for binary cross-entropy. Be sure to refer to this post to help you get started.
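For the curious, this NumPy-only sketch mimics what LabelEncoder followed by np_utils.to_categorical produce for a two-class problem (the helper name is mine):

```python
import numpy as np

def one_hot_labels(labels):
    # np.unique maps each label string to a sorted integer index;
    # rows of the identity matrix then give the one-hot vectors
    classes, ints = np.unique(labels, return_inverse=True)
    return np.eye(len(classes), dtype="float32")[ints], classes
```

With an (N, 2) target like this, the labels line up with a final softmax layer of shape (None, 2).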
Hi Adrian,
Your posts and your books are highly inspirational, and every time I read them I learn more about these technologies. I tried this example as-is and it works absolutely fine; the results are amazing. I have a question for you. I already have both your books, ppcv and DLCV (P bundle).
I just want to use 32 pixel × 32 pixel training images of the shape of an object. Now I tried the same code and it fails. It gives me the following error:
ValueError: Error when checking target: expected activation_7 to have shape (None, 2) but got array with shape (106, 1)
So can you please help me with where I have to change the code in the above exercise?
Based on the error I think you have an issue parsing your class labels. Double-check your label parsing and ensure they are vectorized properly.
Hi Adrian
I am experiencing the same error.
How do I check that they are vectorized properly?
After the “for” loop started on Line 51 ends just write your labels to your terminal:
print(labels)
Make sure the output is what you expect. In this case your input paths are likely incorrect in which case the labels list won’t be populated properly.
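As a quick sanity check on the parsing itself: with a dataset/<class>/<image> layout, the label is just the parent directory name, something like this (the helper name is mine):

```python
import os

def label_from_path(image_path):
    # e.g. dataset/charmander/00000001.png -> "charmander"
    return image_path.split(os.path.sep)[-2]
```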
Hi Adrian,
When I run the code, these warnings pop up:
libpng warning: iCCP: known incorrect sRGB profile.
Is that anything I should be worried about? Should I modify my training data??
I just found your blog and running the code examples are really easy & well written compared to others… Will your new deep learning book go on sale again? I should have jumped on that!
Thanks…
That is a warning from the libraries used to load PNG images from disk via OpenCV. It is just a warning, it can be safely ignored and will not have an impact on training your Keras model. As for a sale on my deep learning book, no, I do not have any plans to run another sale.
Hi Adrian,
Can you help me out on one more tip? When I run the classify.py file after training, I get an error:
classify.py: error: the following arguments are required: -i/--image
I’m also on a linux OS and everything is working up here… Thanks so much for your time to respond…
If you’re new to Python command line arguments you’ll want to read this blog post.
Hi Adrian!
I want to perform image classification on a dataset made of 1000 classes of very similar objects (medical pills). I am going to fine-tune a pre-trained model like mobilenets or Inception and then my idea is to deploy the model in a mobile app (Android).
I am wondering about the hardware limitations of the smartphone because the majority of tutorials and examples of mobile applications regarding image classification or object detection focus on a limited amount of classes. I am not sure if this methodology of the 3-post series is adequate for my specific problem, what do you think?
Besides, I am worried about the similarity between the classes, which I believe would be an obstacle to obtaining a good performance!
Do you think it is possible to achieve a good performance?
Thank you so much for this series of posts, I really appreciate your work! Keep going!
1. You’ll likely want to use a different architecture than the one I discussed here, but keep in mind that state-of-the-art networks such as MobileNet can run on mobile devices. I wouldn’t be too worried about that yet.
Hi,
This is great as usual.
I am wondering how do you chose the model to classify with in testing. The last (100th) epoch may not be the best. So, do you choose the one with the best validation accuracy ? Or the smallest validation loss ?
Regards,
Xavier
It really depends on the application. Keras includes methods and callbacks to handle serializing the “best” model based on whichever metric you choose.
Hey Adrian!
First of all thank you for such a great post! I am trying to classify the aerial satellite images which consists of one roof in every image and I am trying to classify them into their roof types. I have 3 classes with around 9000 images per class. Do you recommend neural network from scratch since I don’t see any pre-trained model with such data similarity so I am a little dubious about transfer learning. Also, do you recommend data augmentation?
Also, I tried using your pokedex network for the same dataset but its validation accuracy seems to fluctuate a lot. Do you have any inputs that might help me?
Thanks again!
Hey Rye, there are a lot of things that can be addressed in this project but I would suggest backing up a bit:
1. Are you trying to perform classification, detection, or segmentation?
2. Unless you have a very specific reason not to you should always apply data augmentation.
3. Keep in mind that the Pokedex network accepts 64×64 input images. Without knowing what your images look like it’s hard for me to recommend a spatial input size but if you’re using aerial/satellite images you’ll likely need larger image dimensions.
I am trying to perform classification of roofs. I have been able to extract aerial images, each containing exactly one roof, and I want to determine the type of the roof from the image. Each image is approximately 256×256, I changed my network a bit accordingly, and it gives me an accuracy of approximately 90%. My current network has 4 blocks of CNN with each block containing two layers.
The first layer has 2 ConvNet of size (64, 110), batch normalized, 2D pooling and dropout of 0.15. (relu)
The second layer has 2 ConvNet of size (84, 84), batch normalized, 2D pooling and dropout of 0.20. (relu)
The third layer has 2 ConvNet of size (64, 64), batch normalized, 2D pooling and dropout of 0.20. (relu)
The fourth layer has 2 ConvNet of size (128, 128), batch normalized, 2D pooling and dropout of 0.20. (relu)
The final layer is a dense layer of 1024 and then the number of classes, softmax activation and dropout of 0.50.
I changed my input dimensions to 112×112 and, with 120 epochs, a batch size of 48 and data augmentation, it performs okay-ish and I get an accuracy of around 90%. I tried using the Inception V3 pre-trained model, froze some of the layers and used my above-mentioned last layer as the last layer, but I don’t get a result better than 80% from that model.
Any input from your end to make the model perform better would be appreciated!
Thank you,
Rye
Thanks for the added details, although I’m a bit confused by what you mean by 2 CONV layers of size 64×110. Are those your output volume dimensions? Or the number of filters?
As far as fine-tuning goes you may want to continue to tune your hyperparameters. You may want to apply feature extraction via the pre-trained net and train a simple linear model on top of them.
In general I would recommend that you work through Deep Learning for Computer Vision with Python so you can gain a better understanding of how to train deep neural networks, including my best practices, tips, and techniques.
Hi Adrian,
After running the training script, all the outputs are generated OK (model, plot, lb), but I get the following message:
Exception ignored in: <bound method BaseSession.__del__ of >
Traceback (most recent call last):
File “/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py”, line 701, in __del__
TypeError: ‘NoneType’ object is not callable
Any idea what’s happening?
This is just a bug in TensorFlow where the session manager is having an issue shutting down. It does not impact the results of training the model. You can safely ignore it.
Training the neural network seems like quite a long process, so I reduced the epochs to 3 to see a first result and check how it’s working.
After training, will the line below work to see results?
python classify.py --model pokedex.model --labelbin lb.pickle --image examples/charmander_counter.png
Yes, training the model can take a bit of time, especially if you are using a CPU. If you would like to see the first result you would need to execute the classify.py script as you suggested.
hey adrian,
thanks for the post..
Can I get an option on the command prompt to change the number of epochs, number of CNN layers, batch size, filter size, etc. using argparse without always editing the code?
If so, how can I?
Hey Vamshi, are you asking how to edit the command line arguments to include the number of epochs and batch size?
yes sir..
Or can I add a separate config file which can change variables like epochs and batch size, as given in the below link? How do I do that?
It’s totally possible but you would need to edit the code significantly. I would suggest creating a configuration file and then loading the configuration file via a command line argument. Then pass the configurations into your optimizer, model, etc. It will require you to refactor the code to handle additional parameters to the constructor. Again, it’s possible, but I would only advise you to continue if you feel comfortable enough with your programming skills.
Thanks Adrian. very well and good job. I have a question. Is there any way to draw bounding box around each predicted object? Is this tutorial an object detection or a classification problem? thanks a lot.
What you are referring to is “object detection”. I would suggest you read this blog post which will help you get up to speed.
Hi adrian,
It’s a great tutorial! I followed all of your instructions here, but I have a question for you. Why does your deep learning model, when I apply a new Pokemon such as Raticate, give a result similar to Charmander? Logically, it should have low probability for all of the trained Pokemon.
Hey Kemas — this model was not trained on Raticate so the model has no idea what Raticate actually looks like. You might want to take a look at this post where we introduced another class to train on, a “background” class, indicating that the input image/frame should be ignored.
Hi Adrian
Am a newbie to ML and your blogs have been really helping me! Thanks a lot.
Q. You used Lenet architecture earlier to solve a similar problem (Santa/not-Santa) and here you have used VGGNet. But in both cases, you trained the model only on your data, and aren’t depending on pre-trained data (like keras blog suggests to use vgg16 directly for cat/dog classification). Do you believe that would potentially increase the accuracy even further?
Generic Q – how do you judge which approach works best, without trying out different options. I understand that depends on the problem, and the classes one is going after; but is there is an implicit qualitative ordering?
1. I’m actually not using VGG16. I’m training a smaller version called “SmallerVGGNet” from scratch. The network is inspired by the VGG-family of networks but is not pre-trained on anything. You could certainly use “transfer learning” (which is what you are referring to) to potentially increase accuracy.
2. I’m not sure what you mean by “implicit qualitative ordering”. Perhaps you can elaborate?
Thanks for clarifying. All I meant was: how do you know which approach to try for any given image classification problem (LeNet, VGGNet, ResNet, etc.), or for that matter something not involving deep learning? Or do you try all approaches and then figure out which gives the best results?
Got it, I understand now. I would suggest taking a look at Deep Learning for Computer Vision with Python where I provide all of my best practices, tips, and suggestions when approaching an image classification problem with deep learning.
Hi Adrian,
I have trained the above pokedex model on three different labels (images), say apple, mango and pineapple. I ran for 50 epochs. Now when I try to classify mango and pineapple it classifies correctly with decent accuracy. But if I give any other image, like a mobile phone, that is also classified as either mango or pineapple. How do I get out of this problem?
Please suggest .. Thanks
Sridhar
You need to include a “background” or “ignore” class and train the model on random images that it may encounter in a real-world scenario that are not part of your fruit classes.
Hi Adrian,
Can you elaborate more on how to include a “background” class. I’ve noticed that the pretrained model you provided displays “background” when I use it in an iOS app. When I train the model myself using your training script and dataset, the model performs well at identifying Pokemon, but unfortunately it also mistakenly identifies just about everything else as an arbitrary Pokemon with a high degree of confidence. The object doesn’t even need to contain colors that are similar to Pokemon.
The “background” class is images of non-Pokemon (i.e., images you want to ignore that are unrelated to the Pokemon classes). You could create the “background” images yourself by sampling an existing dataset, grabbing images from your computer, phone, Facebook, etc.
hi Adrian
Thanks for this tutorial, it is very helpful for a newbie.
You saved the model weights and labels separately, but I have seen others who save the model as signatures, graphs and variables. I tried saving this model using SavedModelBuilder (model.pb, variables.data and variables.index) but am unable to load it again for subsequent classification.
Any suggestion/comments on using a different model save and reload is appreciated.
Thanks
Akusyn
The “SavedModelBuilder” function is actually a TensorFlow function. We’re using Keras in this blog post. You need to save the model using “model.save”. I don’t believe “SavedModelBuilder” is directly compatible with Keras models (but I’ve never tried either).
Adrian,
Great post.
Any thoughts on which libraries to use for prediction?
We can predict with Keras, OpenCV dnn, dlib? Which one should we choose? What is the best practice?
Thanks,
Igor
If you used Keras to train your model, I would suggest you use Keras for prediction. If you used dlib for training, use dlib for prediction.
Hi Adrian,
Great tutorial! I’ve been struggling forever on finding out how to format the training and testing data and labels.
I’m currently doing object detection and classification and currently have a satellite dataset consisting of image chips (224×224) and each chip has multiple objects and classes. So what would the y_train and y_test look like? From all of the examples I’ve seen it looks like the ground truth data consists of a single class label per sample (i.e. a classification problem). My ground truth data consists of multiple bounding boxes and class labels per sample (e.g. image chip).
Do you have any suggestions on how I should format/structure my data based on the ground truth? Thank you for your time!
Keep in mind that object detection and image classification are different. I would suggest reading through Deep Learning for Computer Vision with Python where I discuss my best practices for both object detection and image classification, including how to format and annotate your data. I think it will really help with your project!
I always get an “incorrect” result when predicting the image, but the prediction is correct. I’m confused about the “correct” and “incorrect” conditions here. Could you explain the problem I’ve got? Thanks in advance.
You should refer to my reply to Jay for a detailed discussion on “correct” vs. “incorrect”.
Hi, Adrian. Many thanks for all your helpful knowledge. I am working on a project for simple “autonomous driving” based on image depth using a CNN. I know CNNs a little, having learned from your blog, but I am still confused about how to compute an image depth map using a CNN. Would you please give me guidance on how to apply a CNN in that case?
Thank you very much in advance for your kindness.
Here is a sample paper found on the internet that used a CNN for image depth:
Hey Miguel, it’s awesome that you are studying computer vision and deep learning. I don’t have any guides on estimating depth via single images with CNNs. I might be able to cover that in the future, but I don’t know if or when that may be.
Fantastic post… Did try and got exact result that i wanted. Thank you so much Adrian !! ..
Congrats on your successful result, Diptendu! Nice job.
Hi Adrian,
I have been following you for a long time now. I ran the above code with my own dataset, so I have a question. I had 9 classes and each class has 1020 images. I can see that the data is divided into 80% training data and 20% validation data. Now when I am training on my dataset the training is happening on only “229 images”. I tried to figure out why, but I think I will need your help with this.
So please let me know what I am doing wrong here.
Thanks,
Aniket
Hi Aniket,
This line:
imagePaths = sorted(list(paths.list_images(args["dataset"])))
…will grab all images in your dataset. It assumes that the image classes in your dataset are organized into directories, similar to how the example dataset is organized according to Pokemon species. To verify that all of your 9*1020 images will be used for training, just print the length of the list:
len(imagePaths)
I hope that makes sense.
I can train another picture?
You can use this code to train your own CNN on your own custom image datasets.
How can i get vector of features for every image in dataset?
Are you referring to transfer learning, and specifically feature extraction, using a CNN?
Thank you so much for this simple beginner post
I didn’t use Keras before now I can 🙂
Awesome, congratulations Marwa!
Thanks for sharing your knowledge Adrian!. I’d like to do the same thing but to recognize the number of fingers I am showing. I have 5 folders with count 1 ,2 ,3 , til 5 fingers. I would like to get some help because I am getting a dimension error when using your code. Can you please guide me what things I can change so it will run. Thanks!
I actually cover that exact problem (and include code to solve it) inside the PyImageSearch Gurus course. Be sure to take a look!
This has really helped me understand ML. I have actually modified this and am using my data and I can get great accuracy (greater than 90%) with my examples!
What I am trying to do now is to pass it a directory of images instead of a single image for classify.py. These images won’t have the predicted names (i.e., you would change your charmander_counter to just be pokemon_counter) and I want to have model.predict tell me if that image is a Charmander, Squirtle, etc., and then save that image out with the % and predicted label (e.g., img1_charmander_95per.jpg).
Thoughts?
Congrats on training your model and having it working, Sean! Nice job.
To solve your problem you would need to:
1. Loop over all images in your directory of input images
2. Load each of the input images and preprocess them
3. Append them to an array (which is your “batch”)
4. Pass the batch through the network using the exact same code in the guide
From there you’ll be able to loop over the results and obtain your probabilities.
For more information on how to get started with CNNs, build batches, and make predictions, I would recommend working through Deep Learning for Computer Vision with Python where I include lots of practical examples and code to help you accomplish your project.
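For the output naming the question describes (img1_charmander_95per.jpg), a small stdlib helper along these lines could build the filename once you have each prediction; the helper name and format are illustrative:

```python
import os

def annotated_filename(image_path, label, proba):
    # img1.jpg classified as "charmander" at 95.3% -> img1_charmander_95per.jpg
    base, ext = os.path.splitext(os.path.basename(image_path))
    return "{}_{}_{}per{}".format(base, label, int(proba * 100), ext)
```

You would then pass the result to cv2.imwrite (or shutil.copy) to save the annotated copy.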
Thank you for such a great tutorial
But I’m facing some problems like “Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2”.
Please suggest a solution.
It’s not an error message, it’s just a suggestion from TensorFlow that you could further optimize your pipeline. It does not affect your code. Ignore it and keep going 😉
I have a problem, even after changing the loss function to binary for 2 labels. We keep getting an error saying activation_7 was expecting (2,) but got (1,).
I’m using 640×480 images, 500 images per label, 1000 images total. Not sure how to solve the problem.
I think you may be parsing your class labels incorrectly from the file paths. Double-check that your class labels were parsed correctly.
Hi and thank you for this wonderfull tutorial ! I am facing the same issue as @Arkhaya with the error :
ValueError: Error when checking target: expected activation_7 to have shape (2,) but got array with shape (1,)
when training the network. My labels are correctly extracted from the file paths, even if I don’t really understand how the binarizer works. Do you have suggestions? Thanks again
How many labels are in your dataset? Keep in mind that you need at least two unique image categories to train the network. According to your error you may only have one class label.
Hi Valentin!
I have same issue even images are dog and cat. Can you kindly tell me how you solved this.
Hi Adrian,
thanks a lot for your posts, they are really great!
2 questions:
I have a dataset with pictures in a large number of different sizes, what are the considerations when coming to select the size which all the pics have to be resized to it?
If the selected size won’t be 96*96, what are the rules which according to them I have to change the smallervggnet?
Your input image dimensions are typically dictated by which CNN architecture you are using. Typically input image dimensions include 32×32, 64×64, 96×96, 227×227, and 256×256. If you want to increase your input image dimensions for SmallerVGGNet you would likely need to add more layers to the network, but keep in mind that the more weights you add, typically the more data you’ll need to obtain a reasonable result. I would suggest you read through Deep Learning for Computer Vision with Python for more information.
Can I use it on Raspberry Pi?
Yes, but I would recommend that you only run the trained network on the Pi. I would not recommend actually training the network itself on the Pi.
Hi Adrian,
when I run train.py, my Anaconda prompt just shows “Using TensorFlow backend.” and the program stops.
What should I do? thanks.
Are you sure it has fully stopped and hung? Check your system monitor and ensure the process is still busy.
Thanks Adrian, it has been solved. I ran the script and I got my model.
And I have another question: can we use a GPU on Windows for training the model instead of using the CPU?
Yes, but I do not support Windows officially here on the PyImageSearch blog. You’ll want to install TensorFlow with GPU support. From there TensorFlow, and therefore Keras, will automatically access your GPU.
Excellent article! Thank you.
I have a question. You resize images to 96×96 px for both training and classification. I have used 32×32 px and it does not affect classification accuracy negatively, but rather speeds up training process, as well as brings minor classification speed ups in loading time (4 ms vs 3 ms) and trained model size (9 mb vs 90 mb). I have about 300 images per category (1500 in total).
Don’t get me wrong, please. I certainly have no doubts that you definitely have reasons for using 96×96, but rather want to know “why”.
Again, thanks a lot for your time and efforts!
Thanks Artur, I’m glad you liked the tutorial!
As for why you would choose varying image sizes for a CNN, it is entirely dependent on your dataset. For example, if objects in images are super small in a 96×96 image they would be virtually invisible if resized to 32×32. But if your object is the most dominant region of the image then you may be able to get away with a 32×32 image. Again, it’s highly dependent on your exact use case, your dataset, and how much quality data you have.
Hello sir, your books and tutorials are just great.
When I read your book and implemented the code it worked fine, but now I get this error: “‘NoneType’ object has no attribute ‘compile’”.
It sounds like you introduced an error when copying and pasting the code. Make sure you use the “Downloads” section of the code to download the source code, ensuring it matches mine.
Hi Adrain:
I am very excited to see this blog and tried it, and it runs well. But I’ve met one problem: I tried another pic which is not in these classes, but it gets classified as one of these classes with very high confidence. How can I solve it?
You need to add a separate class to the architecture and name it “unknown”, “don’t care”, or something similar. Then, fill this class with random images your classifier may see but shouldn’t care about. From there, train your network.
Hi, Adrian.
Thank you! I love this tutorial and the model covered here is easy to apply.
I would like to add some classes and train the pretrained model but I don’t know how.
Could you show me how to update the model and lb?
Hey Andie, you can simply replace my “dataset” directory with your own dataset where each class label has its own subdirectory. If you follow my exact directory structure you’ll be able to train the model on your own dataset. If you’re looking to apply fine-tuning (i.e., training a pre-trained model) you should see my example inside Deep Learning for Computer Vision with Python.
Hi Adrian!
I am trying to perform image classification using CNNs and my code is based on yours. However, my validation accuracy is much lower than the training accuracy.
After 50 epochs, I get 60% accuracy for training but only 20% for validation.
My dataset is limited and I am trying to classify 1000 different classes of medical pills. I have only 10 images per class. I performed real-time augmentation which allowed me to enlarge my dataset. How can I get better results? Besides, my training loss is dropping well, reaching 1.5, while my validation loss stops at 5-6. How would you face this issue?
Thank you!
As someone who’s built software to recognize nearly 10,000+ unique prescription pills, I can tell you that the problem is extremely challenging. With only 10 images per class it’s very, very unlikely that you’ll be able to recognize 1,000 different prescription pills unless you are doing some sort of triplet loss/training procedure. I would suggest investing your time in obtaining more training data. I would also suggest working through Deep Learning for Computer Vision with Python where I share my suggestions, tips, and best practices when training your own CNNs on challenging datasets.
Hello, Adrian. This was a great post as usual. You are the best. In the post, you mentioned that you would deploy the program to a smartphone app. I couldn’t find that post. Could you please share the link?
Yes, see this tutorial.
Hi, Adrian! I get a problem when I use train.py: it said an allocation exceeds 10% of system memory. What should I do? Thank you so much for answering this question for me.
It sounds like your machine is running out of RAM. How big is your image dataset? How many images are you working with? And how much RAM does your machine have? If you are working with datasets too large to fit into memory make sure you refer to Deep Learning for Computer Vision with Python where I discuss how to train CNNs on large datasets.
Hi Adrian, great post! When I use your data set training works fine. I’d like to try to prepare model to distinguish 2 classes. I put my images into two separated directories inside dataset directory. My images are RGB images.
However I have some issues with my data, I do not know why the dimension of trainY and testY are: (1106, 1) and (277, 1), instead of (1106, 2) and (277, 2) because of two classes. Do you have any idea what might be wrong?
The LabelBinarizer class will return just integers for 2 classes rather than a one-hot encoding. Use Keras’ np_utils.to_categorical instead.
Hi Adrian, you mentioned that SmallerVGGNet was designed for 96×96 pixel images, right?
Suppose I want to change the image dimension to 300 px; would you mind giving me tips on which part of smallervggnet.py I should change?
Because I tried to train a bunch of food images using your code and it keeps resulting in low accuracy, so I think perhaps I can’t train the food images at 96 px.
It’s unfortunately not that simple. SmallerVGGNet was designed with a balance between (1) image dimensions and (2) dataset complexity. You’ll want to consider if your dataset requires a network with a larger depth to accommodate the increase in input pixel dimensions. I would suggest referring to Deep Learning for Computer Vision with Python where I include my tips, suggestions, and best practices when creating and training your own custom deep neural network architectures. Be sure to give it a look, I’m confident the book will help you.
Can I get the dataset in the above system you implemented??
Yes, just use the “Downloads” section of the tutorial to download the source code + dataset.
Hey Adrian!
How did you run Ubuntu for this tutorial?
I’m running Ubuntu 16.04 on Windows 10 and if I’m thinking correctly, Ubuntu can’t access the directory I’ve set up on Windows with all the pictures. Is this correct and if so, could you recommend a work-around?
Also, do all the images in the dataset need to be the same resolution or can they vary? If they need to be the same resolution, how would you ensure that using Bing Image Search API?
Thanks!
I haven’t tried the Windows/Ubuntu integration (I haven’t used Windows in 11+ years now) but my suggestion would be to transfer your directory of code/images to Ubuntu via SFTP, FTP, Dropbox, or whatever is most convenient for you. From there you can execute the code from the Ubuntu terminal.
As for your second question they don’t have to be the same resolution.
I tried Modifying the code to take the video stream as input but I am getting 0.05 fps why is this classification so slow?
That is very, very slow. It sounds like there is a logic error somewhere in your code. Try using this tutorial as a template for classifying individual frames of a video stream with a Keras CNN.
Hi Mr Adrian thank you for all your effort to explain and facilitate deep learning
i ask it’s possible to recognize person from the dog and cat program like a first experience for a beginner and to classify just two person, thank you in advance.
Hi! Your articles are super fun and useful, so thanks!
I was trying training my own data set, but this time only with two categories, and i got the follow error :
alueError: Error when checking target: expected activation_7 to have shape (2,) but got array with shape (1,)
99.9% percent of the time (at least with my code) the error is due to your directory structure being incorrect. You can verify by reviewing the parsed class labels — you’re likely parsing out the incorrect label from the file path. Double-check and triple-check your label parsing.
Hi,
I faced the same error.
I didn’t do anything on your code or folder structure but deleted 3 folders (./data/mewtwo, ./data/pikachu and ./data/squirtle).
I wonder whether this code works for 2 classes.
Please help.
Thank you.
For only two classes scikit-learn’s LabelBinarizer will only produce integer encodings, not one-hot vector encodings.
To resolve the issue use the LabelEncoder function and then Keras’
np_utils.to_categoricalfunction.
Hi Adrian,
Training a deep neural network on a huge dataset is really time consuming. Is there any way to resume training starting on a particular epoch and iteration using Keras?
Absolutely. You can use Keras checkpointing to save a model to disk every N epochs. From there you can re-load the model via the
load_modelfunction and resume training. I cover exactly how to do that inside my book, Deep Learning for Computer Vision with Python.
Hi Adrian,
Thanks, you described it perfectly in your book in chapter 18.
I highly recommend “Deep Learning for Computer Vision with Python.” to everyone.
Thanks so much, Tomasz!
Hi ! Thank you for this great tutorial !
I’ve managed to run your code on my computer but it seems that the model won’t converge.
I can’t reach the 97% of accuracy. I’m merely about 90%.
Do you kow where this come from ?
Thanks
Hi Adrian,
I have read your tutorials about object classification using SmallVGGNet. However, this architecture only supports low image resolution (96,96). I can’t use this architecture in my case in which I want to classify individual animals using only 3 or 4 high resolution images/individual for training. The resolution of images captured by Pi Cam V2 is 1944 × 2592 that I’m going to reduce to around 450×600 to ensure it still retains important information of patterns on the animal skin. I just wanted to know any suggestion from you on which architecture I can use for my case? Do you have any tutorial to support high resolution images? Thank you Adrian.
You need more images. 3-4 images per individual animal is not enough. Additionally, you should look at triplet loss and siamese networks — they may work better in this case.
Hi Adrian,
I’m a beginner on DL and I started with the basic fashion-MNIST to practice, I read in another blog about a similar CNN model that instead of using RELU activation they use LeakyRELU, saying that it is better since some neurons tend to “die” with RELU, also I tested their implementation of the fashion-MNIST against yours to compare, and the time using their code was less than half then yours, although your accuracy was better, why such a difference?
There are a variety of various activation functions including standard ReLU, Leaky ReLU, ELU, and other extensions. They are hyperparameters of your network that can be adjusted. I typically suggest using ReLU when building your initial model. Once you are able to train it and obtain reasonable accuracy swap in a Leaky ReLU or ELU and you might be able to get some additional accuracy out of it. I cover these activation functions and best practices on how to use them inside Deep Learning for Computer Vision with Python.
i just want to know how to create a custom model in cnn with datasets which include photographed images.pls let me know
You can use this tutorial to train your own CNN with a custom dataset. Have you given it a try?
If you are looking for a more detailed guide on how to train your own custom CNNs be sure to read through Deep learning for Computer Vision with Python.
Hi Adrian,
thanks your best tutorial, I have some question,
Q1- If we have the tensorflow model, how i can convert that model to keras for using in the ios?
Q2 – If we have one more model, is it possible to run on ios together? that’ mean, i want capture a image and feed into the model-1 and pass the result of the model-1 into the model-2?
If it’s possible, publish a new post about deploy the model on android.
Thanks,
I have a tutorial on Keras and iOS that you should read first. If your model is already in TensorFlow format then you can likely just use TFLite on the mobile device.
expected activation_7 to have shape (2,) but got array with shape (1,)
when I change the folders in side the dataset to two folders (labels) it give this error
You need to call
np_utils.to_categoricalon the labels after you transform them. Unfortunately the LabelBinarizer function will return integers if there are only 2 classes — I have no idea why they decided to implement it that way.
Hi Adrian
What additional code would you add to generate plots of ROC curve and PR curve. I would like to generate them in my model and present some arguments. pls help
Take a look at the scikit-learn documentation which will show you how to create and plot a ROC curve.
hey me agian I resolved that error but Im getting a warning libpng warning
You can safely ignore the warning, it will not affect the loading and training of the model.
i have created a CNN model with 3 classes ( vehicles,birds,people ).
now i have to do single prediction .
how should i do that ? or which blog should i prefer ?
If you are new to deep learning, training your own models, and making predictions, you should definitely read through Deep Learning for Computer Vision with Python where I teach you the fundamentals of deep learning and how to use Keras. Definitely give it a read as it will not only solve your problem but make you a better deep learning practitioner as well.
Hello Adrian,
Brilliant tutorial. I’m a beginner at keras programing so your tutorials help a lot. I have used the source code for my image classification. I have 5 classes with a total of 1390 images. Also the images are in black and white so i modified the code for image dimensions to 96,96,1. I hope this is right. However when I run the train.py, I get the error “ValueError: Found input variables with inconsistent numbers of samples: [1016, 1390]”. I spoke with a colleague of mine and mine and he said the dimensions are not equal. However the dimensions are the same for all images i.e. 512×512. Could you please help. Thanks
You’re missing a few steps. Are you trying to train on images that are 96x96x1 or 512x512x1? You need to set those as your
IMAGE_DIMS. Secondly, you need to convert your images to grayscale via
cv2.cvtColorfirst.
If you’re new to deep learning and Keras I would definitely recommend you read through Deep Learning for Computer Vision with Python first. The book will teach how you to train your own custom CNNs on your own datasets (including adjusting input image dimensions and grayscale conversion).
Hi Adrian
Thank you very much for your always helpful post. I applied your code on my face database for face recognition purpose, it’s good but I’m new in keras and CNN and I would to ask you about the database split how can I do it. I need to evaluate my model on unseen data, is the unseen data that you test or classify your model on them are part of the original database and you split them for testing the model? and if it yes, what is the portion that I should to split it from each individual from the face dataset to become unseen data using for testing or evaluating the accuracy of my model to recognize that face? Can you please help me, I will be thankful for you.
Also I would like to ask you when I retrieve the model and label that are saved to recognize the unseen data, the process is very very slow. I was split about 20% from my database as unseen data to evaluate the model, when I trained the model it go very fast, but when I want to evaluate the model on unseen data It is stopped on GPU say (Out Of Memory) and when I test it on CPU, It stills many days, Is what I did correct or I failed in specific point? Why the training go fast and evaluation is very slow? how many portion that should I split it from database as unseen data to be evaluated? I hope your help Thanks a lot.
Regard
Typically you wouldn’t use a “standard” CNN such as this one for face recognition. You would use a siamese network with triplet loss, such as this one.
To address your other question related to data splitting and running out of memory, make sure you read through Deep Learning for Computer Vision with Python which includes my tips, suggestions, and best practices for data splitting and working with large datasets.
Hey Adrian !
I am training a model for two classes. I have changed “loss” to “binary_crossentropy” from “categorical_crossentropy”.
I am getting this error:
ValueError: Error when checking target: expected activation_7 to have shape (2,) but got array with shape (1,)
Can you please help me with this?
It got solved. I followed your reply for Hassan’s question.
Congrats on resolving the issue!
Your question has been addressed in the comments a few times. See my reply to Daniel and Tomas.
how to determine the layers and number of filters in CNN and max pooling
I would suggest reading through Deep Learning for Computer Vision with Python where I show you my tips, suggestions, and best practices to determine the # of layers, filters, etc.
Thanks sir for such a great article. Can you please tell me how can I get output of all the layers for an input image.
I cover how to access the outputs of individual layers inside Deep Learning for Computer Vision — definitely consider starting there.
this is great, but could you tell me how to use it (trained model and label) on webcam/live detection?
Thanks
You mean something like this?
hi i just try this tutorial but didn’t get accurate result what shoulde i do now
Were you applying the tutorial to your own dataset? Without knowing more details on your dataset it’s hard to say what’s going on. My recommendation would be for you to read through Deep Learning for Computer Vision with Python where I not only show you how to train your own CNNs, but also provide my tips, suggestions, and best practices.
Hi Adrian, thank you very much as always!
Can I ask how do you determine if the model is overfitted or underfitted from the loss difference between Train, Validation and Test?
Thank you very much!
I cover that exact question, including how to spot underfitting and overfitting, inside Deep Learning for Computer Vision with Python. I suggest you start there.
Thank you for the great article.
I also tried this, but there is a bug that the acc cost is always 100%.
For the solution of this,please tell me the version of the library you used.
If possible, I also want to know the versions of python, keras, tensorflow, coremltools.
sorry,this is mistake.
Congrats on resolving the issue!
Hi Adrian,
thanks for this precious tutorial
I might have a stupid question but, why doesn’t this work well for classifying human faces ?
Take a look at my face recognition tutorials where I cover that question in more detail.
Hi, Addrian. What i would alter for i do SmallerVGGNet with 16 x 32 images. I have dataset of eyes, nose and mouth (three region of face).
Thank you very much!! for the great article.
You could either:
1. Resize all 16×32 images to be 96×96
2. Or you could use a smaller CNN, such as MiniVGGNet (covered here), and then modify it to accept 16×32 images.
For people that have:
ValueError: Error when checking target: expected activation_7 to have shape (2,) but got array with shape (1,)
Reason:
The labels input array should have as many columns as amount of classes: there should be 1 if the column corresponds to the class number and 0 otherwise. There is a function keras.utils.to_categorical() that converts a class vector (integers) to the abovementioned binary class matrix. Got it from here:
Solution that helped me:
# add imports for keras.utils
from keras.utils import np_utils
# binarize the labels and convert to categorical
lb = LabelBinarizer()
labels = lb.fit_transform(labels)
labels = np_utils.to_categorical(labels)
Correct, that is my biggest gripe with scikit-learn’s LabelBinarizer class. I don’t know why it won’t return a vector for a 2-class classification problem and instead returns only a single integer. For 2 classes you also need the “to_categorical” function, as you noted.
Thanks for such a great tutorial,
I wanna know why did you choose the VGGNet, and why this particular version (smallerVggNet)? what are your reasons behind that decision?
I cover my tips, suggestions, and best practices when choosing a CNN architecture and associated hyperparameters inside Deep Learning for Computer Vision with Python — I suggest you start there if you are interested in learning more about training your own custom CNNs.
Hi Adrian,
in this tutorial,we are detecting one object in one image.Is it possible to detect more objects in one image using keras and CNNs ?
and how to detect objects in live video?
You’re actually performing image classification here, not object detection. See this tutorial to help you learn the differences.
Hi Adrian,
Can you please help how to do image classification for live video?
This tutorial covers video classification with Keras.
Hi Adrian! First of all, your blog rocks
I have been training this network to classify super heros from cómic pages. It tends to work fine but i found 2 problems for this aplcation.
1- it missclassify heros that look similar like spiderman and ironman.
2- i dont know if this network could be used to detect multiple classes on the same image (e.g. Detect which heros appear on a given comic page) and to provide a region of interesar where each detection happens.
For the first problem i have trained a series of models that are binary (e.g. Spiderman/ not spiderman. This second category includes fotos of all other heros). This solved the problem but i find it kind of unefficient.
If you find it interesting i would Love to read what you have to suggest!
Best regards
Hello Adrian,
Great tutorial! I have one question. I am currently working on training a model that will aid me in classifying “fullness” of a parking lot floor (Either 0-100% full with a total of 10 classes). The parking spaces are fixed and only appear on the left side of each image in my dataset. Vehicles will always park in only that area. Would it be a bad idea to use Data Augmentation in this situation?
Detecting vehicles (or absence of them) sounds more like an object detection or instance segmentation problem. Is there a reason you are trying to use standard classification here?
Hi Adrian,
thank you so much for sharing these tutorials, they have been incredible helpful so far.
In my case I have 4 different classes of objects, each class has around 150 – 200 images available. I can successfully train the network and receive very accurate results when presenting one of the 4 known objects to the network.
The issue however, if I use images which are not in one of these 4 classes (actually not even remotely similar), the model will always predict the same class and always with 100% confidence.
Could you point me into the right direction how I can avoid false positives with such high confidence?
Best regards
Create a 5th class called “ignore” and fill it with images unrelated to the four other classes. Train your network on those 5 classes.
Hi Adrian,
thanks for sharing these fantastic tutorials. I’m looking into them for a project of mine to determine the size of a cauliflower in the field and I was wandering if classification is the right approach or should I look to something like facial recognition? What do you think?
Is pyimagesearch module available in your pre configured AWS MI instance with smallervggnet?
The “pyimagesearch” module is just meant to keep code tidy and organized (and to show readers proper Python module structure). It’s not meant to be pip-installable. If you download the source code to one of my blog posts, books, or courses you can upload it to the AMI and run it there.
Really Helpful ,
The way you explain the implementation of CNN,I Bet no one can.
I have Already implemented some interesting use cases using this in Automobile Insurance and Retail sector
Thanks a lot Adrian.
Thanks Sachin 🙂
Can I tutorial deploy it to a smartphone?
See this tutorial on Keras and iOS.
Hi, really thanks for the tutorial. How can I deploy a keras model like this to my own website? (I have already owned a hosting and has already accomplished domain name resolution) Thanks a lot for your help!
You mean like a REST API? If so, take a look at this tutorial.
Dear Adrian,
thanks for fantastic blog.
I need to use my classes for example to classify different objects (Tire , Ladder , chain,..)
I tried to do that with the same code but I get only your labels. I need to change this labels with my own classes and labels .My project is to classify underwater objects , so I need to build my own datasets and labels.
Thanks again for your support.
Best Regards,
Assem
If you need help building your own datasets and training your own custom CNNs I would recommend you read Deep Learning for Computer Vision with Python. That book covers dataset structure and custom training in detail.
Hiiii Adrian
I always found your tutorial helpful but I have some doubts regarding creating my own dataset, is it necessary to make the dimension of each image constant with the same value, can’t we just use the original image downloaded from net while labeling the image?
I would recommend you resize each image such that the dimensions are the same. You can of course download the original image, just make sure they are resized before passing them through the CNN.
Can i train my own dataset using this algorithm ? and please explain VGGNET
Yes you can. If you want to learn how to train your own model, including the inner-workings of VGG, I recommend you read Deep Learning for Computer Vision with Python.
Suppose I trained this network using Keras, can I implement this network using a video stream?
Yes, absolutely — see this video classification tutorial. | https://www.pyimagesearch.com/2018/04/16/keras-and-convolutional-neural-networks-cnns/ | CC-MAIN-2020-45 | refinedweb | 20,249 | 65.12 |
%matplotlib inline import matplotlib.pylab as plt
Installation instructions can be found in the README.md file of this repository. Better to use rendered version from GitHub.
In order to be productive you need comfortable environment, and this is what IPython provides. It was started as enhanced python interactive shell, but with time become architecture for interactive computing.
Since the 0.12 release, IPython provides a new rich text web interface - IPython notebook. Here you can combine:
print('I love Python')
I love Python
$$\int_0^\infty e^{-x^2} dx=\frac{\sqrt{\pi}}{2}$$ $$ F(x,y)=0 ~~\mbox{and}~~ \left| \begin{array}{ccc} F''_{xx} & F''_{xy} & F'_x \\ F''_{yx} & F''_{yy} & F'_y \\ F'_x & F'_y & 0 \end{array}\right| = 0 $$
x = [1,2,3,4,5] plt.plot(x);
from IPython.display import YouTubeVideo YouTubeVideo('F4rFuIb1Ie4')
In order to start Jupyter notebook you have to type:
jupyter notebook
You can download them as .zip file:
wget
Unzip:
unzip master.zip
And run:
cd python_for_geosciences-master/ jupyter notebook
You can use question mark in order to get help. To execute cell you have to press Shift+Enter
?
Question mark after a function will open pager with documentation. Double question mark will show you source code of the function.
plt.plot??
Press SHIFT+TAB after opening bracket in order to get help for the function (list of arguments, doc string).
sum(
You can access system functions by typing exclamation mark.
!pwd
/Users/koldunovn/PYHTON/python_for_geosciences
If you already have some netCDF file in the directory and ncdump is installed, you can for example look at its header.
!ncdump -h test_netcdf.nc
netcdf test_netcdf { dimensions: TIME = 1464 ; LATITUDE = 73 ; LONGITUDE = 144 ; variables: float TIME(TIME) ; TIME:units = "hours since 1-1-1 00:00:0.0" ; float LATITUDE(LATITUDE) ; float LONGITUDE(LONGITUDE) ; float New_air(TIME, LATITUDE, LONGITUDE) ; New_air:missing_value = -9999.f ; }
!cdo nyear test_netcdf.nc
2 cdo nyear: Processed 1 variable over 1464 timesteps ( 0.03s )
Get information from OS output to the python variable
nmon = !cdo nmon test_netcdf.nc nmon
['cdo nmon: Processed 1 variable over 1464 timesteps ( 0.03s )', '13']
Return information from Pyhton variable to the SHELL
!echo {nmon[1]}
13
The magic function system provides a series of functions which allow you to control the behavior of IPython itself, plus a lot of system-type features.
list(range(10))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
And find out how long does it take to run it with %timeit magic function:
%timeit list(range(10))
601 ns ± 63.2 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
Print all interactive variables (similar to Matlab function):
Receive as argument both the current line where they are declared and the whole body of the cell.
%%timeit range(10) range(100)
485 ns ± 24.8 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
Thre are several cell-oriented magic functions that allow you to run code in other languages:
%%bash echo "My shell is:" $SHELL
My shell is: /bin/bash
%%perl $variable = 1; print "The variable has the value of $variable\n";
The variable has the value of 1
You can write content of the cell to a file with %%writefile (or %%file for ipython < 1.0):
%%writefile hello.py #if you use ipython < 1.0, use %%file comand #%%file a = 'hello world!' print(a)
Overwriting hello.py
And then run it:
%run hello.py
hello world!
The %run magic will run your python script and load all variables into your interactive namespace for further use.
In order to get information about all magic functions type:
%magic | http://nbviewer.jupyter.org/github/koldunovn/python_for_geosciences/blob/master/01%20-%20Scientific%20modules%20and%20IPython.ipynb | CC-MAIN-2018-26 | refinedweb | 615 | 67.35 |
In this month's column I'll also be covering:
Axis uses OpenGL on SGI and Win 95/NT, and Mesa on the remaining platforms. The 3dfx accelerated version utilizes the Mesa Voodoo libraries. The Linux version currently has the best coloring/shading; the different OpenGL implementations have quirks that we haven't sorted out yet.
The rendering engine uses a simple stack machine interpreter, and processes a language that has similarities to Lisp, Forth, and Adobe's PostScript. The interpreter is multi-threaded, so objects in the 3D environment can have private namespaces. We are working on a programming manual for the language.
It is also network-ready; you can talk directly to the rendering engine with a TCP/IP connection. The distribution includes source code for an example TclTk program which utilizes the network connection (this is the tool we used to position models within the 3D environment). We will be releasing more complex modelers shortly.
The rendering engine and language interpreter will be the base for our multi-user shared environment application, which we plan to release near the end of July. Environments, and information about positions of other users, will be downloaded via TCP/IP; if you choose to customize your avatar, code for that can be uploaded.
Enjoy, and let me know if you have questions.
Patrick H. Madden
phm@webvision.com
pickle@cs.ucla.edu
or
phm@ikm.com when we get our mail server sorted out.....
...there is a very nifty morphing tool, called xmrm, available at. I played with this a little and it has one of the most professional looking interfaces I've seen in awhile. It's relatively easy to use, at least if you follow the one example morph it provides.
...there is a Web site devoted to explaining how to make MPEG movies?
Take a look at
GVL/Software/mpeg.html to find out more.
A: Well, I don't know of any tools that can take a set of TGA files and directly turn them into an animation on Linux systems. I'm not that familiar with animations yet, but here is what I do know.
First, you have two types of animations you can create (with freely available tools) from a set of raster images: MPEG or an animated GIF. The latter requires the images to be in GIF format (GIF89a, actually). There are two tools for taking the GIF files and turning them into an animation: WhilrGIF and MultiGIF. Both are command line tools and both are fairly easy to use. I like MultiGIF a little more simply because it can create smaller animation using sprites (small images that can overlay the previous image). Understanding how to do this is a little tricky, but not that tough. WhirlGIF simply concatenates the set of GIFs together into an animated sequence. Playing an animated GIF can only be done by Web browsers, although I only know for certain that both Netscape and MSIE support this format. To my knowledge (someone correct me if I'm wrong) there are no "animated GIF players" for Linux.
MPEG is an animation format that I've just started to experiment with.
There is only one command line tool that I'm aware of for creating the
animations - mpeg_encode
- but there are quite a few tools for viewing them (xanim,
MpegTV, mpeg_play,
etc). Creating the animation is done by setting up a text file with
the configuration information needed by mpeg_encode. It then reads
the configuration file, determines what sort of processing is to be done
and takes the input files and creates the MPEG output file. The configuration
can be fairly sophisticated, but I found the default template worked fairly
well with only a few minor modifications. One of those modifications
was to tell mpeg_encode what other tool to use to convert the input files,
which were in TIFF format (rendered from BMRT), into a format that mpeg_encode
could handle. Fortunately, mpeg_encode handles two fairly common
formats: JPEG and PPM/PNM (it actually supports a couple of others,
but these two will be readily recognizable to most users). I used
the NetPBM tool tifftopnm. The TIFF files are converted
on the fly by mpeg_encode as long as you tell it what converter to use.
There is another format called FLI which has an encoder. My understanding is that this format is slowly dying as MPEG gains popularity.
So now that you know what formats you need to put the animation in you might wonder how to get the TGA files into the formats you need. Thats a common question when dealing with both 2D and 3D images, in both animated and static formats. The answer: get either the NetPBM tools. ImageMagick, or ImageAlchemy (the latter being a more sophisticated commercial product). Any of these are valuable tools for your arsenal of image processing since they all perform the often needed task of converting from one format to another. NetPBM is what I currently use, although I don't believe it has a tool for converting JPEG images to other formats (there is an add-on package for NetPBM that handles this, but I don't think the NetPBM package itself has JPEG conversion tools - I could be wrong, its been awhile since I downloaded the package).
So, to summarize how to get your TGA files into an animation:
Reagen Ward wrote:
I come from the world of PHIGS for visualization, and thus can't stand VRML as a supposed data format. I'd love to hear your opinions on why it's not ready for personal use.
Originally I had objected to it due to bandwidth issues. I've learned since then that this may not be as big a limitation as I once thought since VRML provides a language which can be passed between client and server and doesn't (to my knowledge - which admittedly is still somewhat limited) require the actual images to be passed. PHIGS could probably be done this way too, but PHIGS needs a "PHIGS for Dummies" layer slapped on top to make it a little more user friendly.
However, the real limitation right now is processing power. Even
if you pass only descriptions of the objects to render, the end system
still has to be fast enough to render them from the point of view
of the user. This is very CPU intensive. The
average user doesn't have this kind of processing power (have you seen
the new WebTV boxes? They are even slower and
dumber than the average 2 year old PC). This processing could be moved off CPU into some adapter card (maybe a VRML-ready display card), but such technology isn't available yet so its cost would still be (for some time) out of the reach of the average home.
Now it's not unlikely to see VRML in some environs: kiosks in stores or malls (real ones, not Internet Malls) come to mind or any kind of public facility that provides information to users to be browsed at their own pace. These places will have limited point-of-view (like point-of-sale) locations on a local network so bandwidth is not a problem, nor is server capacity (it's known pretty much ahead of time how much activity they're likely to have). The point-of-view boxes can be as powerful as the mall can afford. VRML provides a reasonable return-on-investment for these situations.
But the big money, and money (income, that is) is what drives acceptance, only comes when you can move the technology into the home. Thats what WebTV's are all about - computers for the common man at toaster prices. VRML requires too much processing for the average home, so it's not likely to be a big technology for at least 2-5 years. It depends on if Intel/Sun/HP/etc can find a way to make money producing VRML-toasters.
Hows that?
No Muse next month (September). I'll be at SIGGRAPH and otherwise busy throughout August and just won't have time for it. But I'll be back in October, probably with lots of goodies from SIGGRAPH (or at least I hope I | http://www.tldp.org/LDP/LGNET/issue20/gm.html | CC-MAIN-2014-41 | refinedweb | 1,376 | 60.85 |
Access $router outside vue
How can I access $router in a .js file?
i would try to import the router where you need it:
import router from '/router/index'; //replace with your correct path
Tried that, but the line
router().replace({name: 'logout'})only changes the url, without anything happening on the UI. Things work after I click F5
- Allan-EN-GB Admin last edited by
router().push({name: 'logout'})
Both
router().replace({name: 'logout'})and
router().replace({name: 'logout'})only change the URL, but nothing more happens. The page isn’t changed, no errors are shown in the console. I have to click F5 and then things work (because the URL is changed to the right one)
- rstoenescu Admin last edited by
Use boot files where you have access to the instance of the Router (as param to the default export method).
If you import and call router() you are essentially creating ANOTHER instance of the router, so nothing can actually happen to your app since your app is connected only to the initial Router.
Thank you @rstoenescu for the guidelines. I’m trying to use the router in a vuex action inside of a module. How would I transfer that part of the redirect logic in a boot file?
- metalsadman last edited by metalsadman
@reath follow what @rstoenescu suggested, then import that boot file in your vuex.
edit. something like
// boot/router.js
let routerInstance = void 0

export default async ({ router }) => {
  // something to do
  routerInstance = router
}

export { routerInstance }

// store/somemodule/action.js
import { routerInstance } from 'boot/router'

export const someAction = (...) => {
  // ...
  routerInstance.push('/some-route')
}
- rstoenescu Admin last edited by
If you’re using this in a Vuex store file, then it will suffice to access “this.$router”. Just make sure you don’t define your actions with ES6 arrow syntax (because “this” will mean something else as an effect).
export function someAction (...) {
  // ...
  this.$router...
}
What are you trying to accomplish?
In one scenario, I need to drive the routing from an Electron menu. In this case I use an Electron function to fire an event, and inside Vue I can access a 'bridge' to listen for Electron-initiated events. So the user clicks on a menu item and the Vue router changes the page
Welcome to a Natural Language Processing tutorial series, using the Natural Language Toolkit, or NLTK, module with Python.
The NLTK module is a massive toolkit, aimed at helping you with the entire Natural Language Processing (NLP) methodology. NLTK will aid you with everything from splitting sentences from paragraphs, splitting up words, recognizing the part of speech of those words, highlighting the main subjects, and then even with helping your machine to understand what the text is all about. In this series, we're going to tackle the field of opinion mining, or sentiment analysis.
In our path to learning how to do sentiment analysis with NLTK, we're going to cover a number of topics along the way. The easiest method of installing the NLTK module is with pip.
For all users, that is done by opening up cmd.exe, bash, or whatever shell you use and typing:
pip install nltk
Next, we need to install some of the components for NLTK. Open python via whatever means you normally do, and type:
import nltk

nltk.download()
Unless you are operating headless, a GUI will pop up like this, only probably with red instead of green:
Choose to download "all" for all packages, and then click 'download.' This will give you all of the tokenizers, chunkers, other algorithms, and all of the corpora. If space is an issue, you can elect to selectively download everything manually. The NLTK module will take up about 7MB, and the entire nltk_data directory will take up about 1.8GB, which includes your chunkers, parsers, and the corpora.
If you are operating headless, like on a VPS, you can install everything by running Python and doing:
import nltk
nltk.download()
d (for download)
all (for download everything)
That will download everything for you headlessly.
Now that you have all the things that you need, let's knock out some quick vocabulary:

- Corpus: a body of text, for example a collection of medical journals.
- Lexicon: words and their meanings, like a dictionary.
- Token: each "entity" that results from splitting something up based on rules; for example, each word is a token when a sentence is tokenized into words.

These are the words you will most commonly hear upon entering the Natural Language Processing (NLP) space, but there are many more that we will be covering in time. With that, let's show an example of how one might actually tokenize something into tokens with the NLTK module.
from nltk.tokenize import sent_tokenize, word_tokenize

EXAMPLE_TEXT = "Hello Mr. Smith, how are you doing today? The weather is great, and Python is awesome. The sky is pinkish-blue. You shouldn't eat cardboard."

print(sent_tokenize(EXAMPLE_TEXT))
At first, you may think tokenizing by things like words or sentences is a rather trivial enterprise. For many sentences it can be. The first step would likely be a simple .split('. '), or splitting by period followed by a space. Then maybe you would bring in some regular expressions to split by period, space, and then a capital letter. The problem is that things like Mr. Smith would cause you trouble, and many other things will too. Splitting by word is also a challenge, especially when considering contractions, like we and are becoming we're. NLTK is going to go ahead and just save you a ton of time with this seemingly simple, yet very complex, operation.
The above code will output the sentences, split up into a list of sentences, which you can do things like iterate through with a for loop.
['Hello Mr. Smith, how are you doing today?', 'The weather is great, and Python is awesome.', 'The sky is pinkish-blue.', "You shouldn't eat cardboard."]
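As a quick aside (an illustration added for comparison, not part of the original tutorial), here is what the naive period-plus-space split does to the same text:

```python
EXAMPLE_TEXT = "Hello Mr. Smith, how are you doing today? The weather is great, and Python is awesome. The sky is pinkish-blue. You shouldn't eat cardboard."

# naive approach: break on every period followed by a space
naive = EXAMPLE_TEXT.split('. ')
print(naive[0])  # 'Hello Mr': the abbreviation 'Mr.' fooled the split
```

The question mark is not treated as a sentence boundary at all here either, which is exactly the kind of edge case sent_tokenize() already handles for you.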
So there, we have created tokens, which are sentences. Let's tokenize by word instead this time:
print(word_tokenize(EXAMPLE_TEXT))
Now our output is:
['Hello', 'Mr.', 'Smith', ',', 'how', 'are', 'you', 'doing', 'today', '?', 'The', 'weather', 'is', 'great', ',', 'and', 'Python', 'is', 'awesome', '.', 'The', 'sky', 'is', 'pinkish-blue', '.', 'You', 'should', "n't", 'eat', 'cardboard', '.']
There are a few things to note here. First, notice that punctuation is treated as a separate token. Also, notice the separation of the word "shouldn't" into "should" and "n't." Finally, notice that "pinkish-blue" is indeed treated like the "one word" it was meant to be turned into. Pretty cool!
Now, looking at these tokenized words, we have to begin thinking about what our next step might be. We start to ponder how we might derive meaning by looking at these words. We can clearly think of ways to put value to many words, but we also see a few words that are basically worthless. These are a form of "stop words," which we can also handle. That is what we're going to be talking about in the next tutorial.
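As a tiny preview of that stop-word idea (a made-up illustration; the real approach in NLTK uses its own stop-word corpus rather than this hand-made list), filtering tokens against a small set is a one-liner:

```python
words = ['We', 'can', 'clearly', 'think', 'of', 'ways', 'to', 'put', 'value', 'to', 'many', 'words']
stop_words = {'of', 'to', 'can'}  # a hand-made stop list, just for this demo

# keep only the tokens that carry some meaning
filtered = [w for w in words if w.lower() not in stop_words]
print(filtered)
```

Everything left over is at least a candidate for carrying meaning, which is the whole point of stripping stop words before analysis.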
String to boolean Conversion: Sometimes, we obtain values in string format in Java. Printing the value is no problem (the user gets the same output either way), since a string prints the same characters the data type would.
For example, a boolean value may be obtained in string format, as in command-line arguments or from a TextField's getText() method. The string value has to be converted into the boolean data type to be usable in code. Casting does not work, as string and boolean are incompatible for conversion, either implicitly or explicitly. The conversion requires extra effort in code, known as a "parsing operation". A parsing operation involves the usage of a wrapper class and a parseXXX() method. String to boolean conversion requires the Boolean class and its parseBoolean() method, as explained in the following program.
Parsing Example on String to boolean conversion
public class Conversions {
    public static void main(String args[]) {
        String str = "true";
        System.out.println("true in String form: " + str);  // printing is no problem

        boolean b1 = Boolean.parseBoolean(str);  // String to boolean conversion
        if (b1) {
            System.out.println("Yes converted");  // using in coding
        }
    }
}
Output screenshot on String to boolean Example
parseBoolean() is a method of the wrapper class Boolean which converts the string str to the boolean b1. Now b1 can be used in code, for example in control structures.
Using the same technique, it is possible to convert string to other data types byte, short, int, long, float, double and character also.
Note: The other way boolean to string is also possible.
View all for 65 types of Conversions
2 thoughts on “String to boolean Conversion in Java”
I am taking a Java class and in My Programming Lab the question is:
Assume that an int variable age has been declared and already given a value. Read the String (S or T or B) that the user types in into a String variable choice that has already been declared.
ASSUME the availability of a variable, stdin, that references a Scanner object associated with standard input.
My answer was:
choice = stdin.next();
if (choice = "S"){
if (age <= 21)
System.out.println("vegetable juice");
System.out.println("cabernet");}
else if (choice = "T"){
if (age <= 21)
System.out.println("cranberry juice");
System.out.println("chardonnay");}
else if (choice = "B"){
if (age <= 21)
System.out.println("soda");
System.out.println("IPA");}
else
System.out.println("invalid menu selection");
I keep getting a compilation error, stating that:
CTest.java:11: error: incompatible types
if (choice = "S"){
^
required: boolean
found: String
CTest.java:15: error: incompatible types
else if (choice = "T"){
^
required: boolean
found: String
CTest.java:19: error: incompatible types
else if (choice = "B"){
^
required: boolean
found: String
3 errors
Please Help!!!
Your problem is here, check it.
choice = "S"
must be as
if(choice.equals("S"))
In this Tutorial, we will learn about the Java String indexOf() Method and its Syntax and Programming Examples to find the Index of Characters or Strings:
We will explore the other options that are associated with the Java indexOf() method and it’s usage along with simple programming examples.
Upon going through this tutorial, you will be able to understand the different forms of the String indexOf() Java method and you will be comfortable in using it in your own programs.
=> Check ALL Java Tutorials Here.
What You Will Learn:
Java String indexOf Method
As the name suggests, a Java String indexOf() method is used to return the place value or the index or the position of either a given character or a String.
The return type of the Java indexOf() is “Integer”.
Syntax
The syntax is given as int indexOf(String str), where str is a String variable; this will return the index of the first occurrence of str.
Options
There are basically four different options/variations of using the Java indexOf() method.
- int indexOf(String str)
- int indexOf(String str, int StartingIndex)
- int indexOf(int char)
- int indexOf(int char, int StartingIndex)
As discussed earlier, the Java indexOf() method is used to return the place value of either a substring or a character of the String. The indexOf() method comes in two variations each, i.e. one pair for Strings and one pair for characters.

The String and character variations each come in a form with and without a StartingIndex. This StartingIndex is the index from where the search for the character or substring has to be started.
Finding The Index Of A Substring
This is the simplest form of the Java indexOf() method. In this example, we are taking an input String in which we are going to find the index of a substring that is a part of the main String.
public class indexOf {
    public static void main(String[] args) {
        String str = "Welcome to Softwaretestinghelp";
        // Printing the index of a substring "to"
        System.out.println(str.indexOf("to"));
    }
}
Output:
Finding The Index Of A Character
In this example, we will see how the StartingIndex works when we try to find the index of the character from the main String. Here, we have taken an input String in which we are specifying the two different StartingIndex and see the difference too.
The first print statement returns 1 as it is searching from the 0th index whereas the second print statement returns 6 as it is searching from the 5th index.
public class indexOf {
    public static void main(String[] args) {
        String str = "Welcome";
        // returns 1 as it is searching from the 0th index
        System.out.println(str.indexOf("e", 0));
        // returns 6 as it is searching from the 5th index.
        System.out.println(str.indexOf("e", 5));
    }
}
Output:
Scenarios
Scenario 1: What happens when we try to find the index of a character that is not available in the main String.
Explanation: Here, we have initialized a String variable and we are trying to get the index of the character as well as a substring which is not available in the main String.
In this type of scenario, the indexOf() method will always return -1.
public class indexOf {
    public static void main(String[] args) {
        String str = "Software Testing";
        /*
         * When we try to find the index of a character or String
         * which is not available in the Main String, then
         * it will always return -1.
         */
        System.out.println(str.indexOf("X"));
        System.out.println(str.indexOf("x"));
        System.out.println(str.indexOf("y"));
        System.out.println(str.indexOf("z"));
        System.out.println(str.indexOf("abc"));
    }
}
Output:
Scenario 2: In this scenario, we will try to find the last occurrence of a character or substring in a given String.
Explanation: Here, we are going to be familiar with the additional method of the Java indexOf() method. The lastIndexOf() method is used to find the last occurrence of a character or substring.
In this example, we are fetching the last index of the character ‘a’. This can be accomplished by the Java indexOf() method as well as the lastIndexOf() method.
The lastIndexOf() method is easy to use in this kind of scenario as we do not require any StartingIndex to be passed. While using the indexOf() method, you can see that we have passed the StartingIndex as 8 from where the index will start and continue to find the occurrence of ‘a’.
public class indexOf {
    public static void main(String[] args) {
        String str = "Saket Saurav";
        /*
         * The first print statement is giving you the index of the first
         * occurrence of character 'a'. The second and third print
         * statements are giving you the last occurrence of 'a'.
         */
        System.out.println(str.indexOf("a"));
        System.out.println(str.lastIndexOf("a"));
        System.out.println(str.indexOf("a", 8));
    }
}
Output:
Frequently Asked Questions
Q #1) How to find the length of a string in Java without using length method?
Answer: Java has an inbuilt method called length() that is used to find the length of a String. This is the standard way to find the length. However, we can also find the length of a String using the lastIndexOf() method but it cannot be used while we are providing input through the console.
Let’s see the below example where we have used both the methods to find the length of a String.
public class indexOf {
    public static void main(String[] args) {
        String str = "Software Testing Help";
        /* Here we have used both the length() and lastIndexOf() methods
         * to find the length of the String.
         */
        int length = str.length();
        int length2 = str.lastIndexOf("p");
        length2 = length2 + 1;
        // Printing the Length using length() method
        System.out.println("Length using length() method = " + length);
        // Printing the Length using lastIndexOf() method
        System.out.println("Length using lastIndexOf() method = " + length2);
    }
}
Output:
Q #2) How to find the index of a dot in Java?
Answer: In the below program, we will find the index of ‘.’ that should be a part of the String. Here, we will take an input String that contains two ‘.’ and then with the help of indexOf() and lastIndexOf() methods, we will find the place value of the first and last dot ‘.’.
public class indexOf {
    public static void main(String[] args) {
        String str = "saket.saurav8@abc.com";
        /* Here, we are going to take an input String which contains two '.'
         * and then with the help of indexOf() and lastIndexOf() methods,
         * we will find the place value of the first and the last dot '.'
         */
        System.out.println(str.indexOf('.'));
        System.out.println(str.lastIndexOf('.'));
    }
}
Output:
Q #3) How to get the value of elements of an array in Java?
Answer:
Given below is the programming example to extract the elements of an array.
Elements start from arr[0], thus when we print arr[0]… till the last index, and we will be able to retrieve the elements specified at a given index. This can be done either by specifying the index number of the element or by using a loop.
public class indexOf {
    public static void main(String[] args) {
        String arr[] = {"Software", "Testing", "Help"};
        /* Elements start from arr[0], hence when we
         * print arr[0]... till the last index, we will
         * be able to retrieve the elements specified at a
         * given index. This is also accomplished by using a For Loop.
         */
        System.out.println(arr[0]);
        System.out.println(arr[1]);
        System.out.println(arr[2]);
        System.out.println();
        System.out.println("Using For Loop: ");
        for (int i = 0; i < arr.length; i++) {
            System.out.println(arr[i]);
        }
    }
}
Output:
Q #4) How to get the index of a list in Java?
Answer: In the below program, we have added some elements and then we have tried to find the index of any of the elements present in the list.
import java.util.LinkedList;
import java.util.List;

public class indexOf {
    public static void main(String[] args) {
        /* Added a few elements in the list and then
         * found the index of one of the elements
         */
        List<Integer> list = new LinkedList<>();
        list.add(523);
        list.add(485);
        list.add(567);
        list.add(999);
        list.add(1024);
        System.out.println(list);
        System.out.println(list.indexOf(999));
    }
}
Output:
Q #5) How to get the second last index of the string in Java?
Answer: Here, we have found the second last index as well as the second last character occurring in the String.
As we have to find the second last character, we have subtracted 2 characters from the length of the String. Once the character is found, we have printed using chars[i] and the index of the second last character as well.
public class indexOf {
    public static void main(String[] args) {
        String str = "Software Testing Help";
        char[] chars = str.toCharArray();
        /* Since we have to find the second last character, we have subtracted 2
         * from the length of the String. Once the character is found, we have printed it
         * using chars[i] and also the index of the second last character.
         */
        for (int i = chars.length - 2; i > 0;) {
            System.out.println("The second last character is " + chars[i]);
            System.out.println("The index of the character is " + str.indexOf(chars[i]));
            break;
        }
    }
}
Output:
Conclusion
In this tutorial, we understood the Java String indexOf() method in detail along with the options that are associated with the Java indexOf() method.
For better understanding, this tutorial was explained with the help of different scenarios and FAQs along with adequate programming examples on each usage to explain the ways of using the indexOf() and lastIndexOf() methods.
=> Watch Out The Simple Java Training Series Here. | https://www.softwaretestinghelp.com/java-string-indexof-method/ | CC-MAIN-2021-17 | refinedweb | 1,598 | 60.95 |
In one of my modules, dealing with external resources, it would make sense to know if the module was loaded at compile time with a

use MyModule;

or at runtime with a

require MyModule;
Anyway, read perldoc -f use, perldoc -f require, perldoc -f import, and perldoc -f caller.
caller will tell you what you need to know.
How called was &I? further explores this (I being a function, just like import).
update: silly rabbit Biker, tricks are for kids. BEGIN blocks will always get executed, observe
#moo.pm
package moo;
BEGIN {
warn "moo ".join'',caller(1);
}
sub import {
warn "import ".join'',caller(1);
}
1;
__END__
E:\new>perl -e"use moo;"
moo mainmoo.pm5(eval)000 at moo.pm line 4.
import mainmoo.pm1main::BEGIN100 at moo.pm line 8.
E:\new>perl -e"require moo;"
moo mainmoo.pm5(eval)000 at moo.pm line 4.
Sorry to be so "silly", but...
If my module gets conditionally require'd inside an eval(), then the BEGIN block in my module won't be executed when the application starts its execution, which is what I'm trying to verify.
Update:
I read below: "...the moment it is completely defined ..." which corresponds to my understanding.
Doesn't this mean that an eval("require MyModule") will make sure that MyModule gets parsed at that very moment instead of during compilation of the main script?
If this is so, then any BEGIN block in MyModule will be parsed and executed at eval() time. No?
If this is not so, but all modules are parsed at compile time even if they are only conditionally required in an eval() statement, how can it be that modules that are not at all available on the system will not create a compilation error as long as the condition is not met and the module is not being actually required?
And how does perl handle when I read in from a database what module to conditionally require in an eval() statement?
(Where the name of the module does not ever appear in my code, but only in a variable brought in from the database.)
from perldoc perlmod
Four special subroutines act as package constructors and destructors.
These are the BEGIN, CHECK, INIT, and END blocks.
Inside an END subroutine, $? contains the value that the program is
going to pass to exit(). You can modify $? to change the exit
value of the program. Beware of changing $? by accident (e.g. by
running something via system).
Similar to BEGIN blocks, INIT blocks are run just before the
Perl runtime begins execution, in ``first in, first out'' (FIFO) order.
For example, the code generators documented in perlcc make use of
INIT blocks to initialize and resolve pointers to XSUBs.
Similar to END blocks, CHECK blocks are run just after the
Perl compile phase ends and before the run time begins, in
LIFO order. CHECK blocks are again useful in the Perl compiler
suite to save the compiled state of the program.
When you use the -n and -p switches to Perl, BEGIN and
END work just as they do in awk, as a degenerate case.
Both BEGIN and CHECK blocks are run when you use the -c
switch for a compile-only syntax check, although your main code
is not.
Is it possible to find out where BEGIN was called from using the function caller()? If yes, there may be a value (filename from (caller())[1] and line number from (caller())[2]) and you could parse the file around the line. But I don't know if this works with BEGIN blocks.
Here's another dirty idea for longer-running programs: there exists a Perl variable called $^T (=$BASETIME) which contains the Unix epoch time when the program was started. If the module was "used", this will be about the same as a timestamp in the module's BEGIN block, and may be earlier if the module is required later at runtime.
Sorry!
Best regards,
perl -e "s>>*F>e=>y)\*martinF)stronat)=>print,print v8.8.8.32.11.32"
import is only called from use.
Excelent point, but only if my module would be a 'traditional' module using the Exporter.
This is an OO module. No export, no import. (Unless I've misunderstood something fundamental again. ;-)
Sorry I didn't mention that in my original post.
Matt
package amIBeingUsed; # in amIBeingUsed.pm
if(defined( ${ caller()."::RUNTIME_OK" } ))
{
# the flag is set
print "what took you so long to call me?";
}
else
{
# still compiling
print "eurgh, i feel so cheap!";
}
sub import
{
print " slap!!!\n";
}
1;
use strict;
my $module = "amIBeingUsed";
our $RUNTIME_OK = 1;
# possible scenarios
use amIBeingUsed; # "eurgh, i feel so cheap! slap!!!"
eval "use $module"; # "what took you so long to call me? slap!!!"
require amIBeingUsed; # "what took you so long to call me?"
eval "require $module"; # "what took you so long to call | http://www.perlmonks.org/?node_id=218006 | CC-MAIN-2015-48 | refinedweb | 807 | 75.91 |
On Mon, 2008-09-15 at 20:40 +0200, Michael Niedermayer wrote:
> On Mon, Sep 15, 2008 at 10:58:06AM -0700, Baptiste Coudurier wrote:
> > Hi,
> >
> > Michael Niedermayer wrote:
> > > On Mon, Sep 08, 2008 at 02:54:14PM -0700, Baptiste Coudurier wrote:
> > >> Hi,
> > >>
> > >> $subject, to use dnxhd raw essences.
> > > [...]
> > >> Index: libavformat/raw.c
> > >> ===================================================================
> > >> --- libavformat/raw.c (revision 15275)
> > >> +++ libavformat/raw.c (working copy)
> > >> @@ -487,6 +487,15 @@
> > >> }
> > >> #endif
> > >>
> > >> +static int dnxhd_probe(AVProbeData *p)
> > >> +{
> > >> +    static const uint8_t header[] = {0x00,0x00,0x02,0x80,0x01};
> > >> +    if (!memcmp(p->buf, header, 5))
> > >> +        return AVPROBE_SCORE_MAX;
> > >> +    else
> > >> +        return 0;
> > >> +}
> > >
> > > Can more than that be used for a more reliable probe?
> > > I mean yes its 5 bytes but they are all 0 except 3 bits, thus this might
> > > be more common in real files than expected in random data.
> > [...]
> > Besides, after these 5 bytes, I'd need to go far to fetch interesting
> > data like cid, Im not sure.
>
> well its not that important, we can always leave it until someone actually
> finds some misdetection. I wasnt aware that these 5 bytes where the only
> easy checkable thing ...

There's the 32bit end-of-frame marker 0x600DC0DE but that's probably not
helpful here, but might be useful in the dnxhd_find_frame_end() function
of libavcodec/dnxhd_parser.c (also part of the patch).

Stuart Cunningham
The migration of Java EE to the Eclipse Foundation has been an enormous effort by the Eclipse Foundation staff and the many contributors, committers, members, and stakeholders that are participating.
In a recent blog post by Mike Milinkovich, the Eclipse Foundation's Executive Director, he outlines and summarizes the progress to date, as well as the implications of the recent agreement between Eclipse and Oracle on Jakarta EE and use of Java trademarks and the javax namespace.
For more information, check out the blog.
We are proud to announce that our 2019 IoT Developer Survey results are now available! In February & March 2019, we conducted the fifth annual IoT Developer Survey and 1,717 responses were received.
Thank you to all who contributed to this initiative and helping us gain insight into IoT developer communities worldwide! This annual survey is intended for our IoT members to continuously learn key trends that are happening in this ever-changing world we call the Internet of Things.
For more insight on the key findings, read Frédéric Desbiens's blog!
The 2019 program committee has been announced, and the group is hard at work drafting the program tracks for this year’s event. Submissions open on May 20.
Sponsor slots are filling up. Be sure to reserve your spot soon; early-bird pricing for sponsorship ends at the end of June.
We look forward to seeing the community at EclipseCon Europe October 21 - 24 in Ludwigsburg.
We are ramping up for another Eclipse IoT Day next week in Santa Clara! Details on the keynote speakers, schedule and registration can be found here.
This event is co-located with IoT World 2019, the world’s biggest IoT event, happening May 13-16. Use promo code ECLIPSE20 for 20% off and be part of over 12,500 #IOT professionals as they gather at the intersection of industries and #IOT innovation and follow us on Twitter at @EclipseIoT to see what we’re up to!
To help our members and community understand the ways they can leverage the Eclipse brand, and to better ensure the consistency of our corporate brand identity, we have created a new set of Brand Guidelines. This style guide outlines usage standards to ensure that the Eclipse Foundation logo is instantly and consistently recognizable however and wherever it is used by Foundation members and the community.
We welcome feedback on the guidelines and any other ways we can help you leverage the various assets of the Eclipse Foundation.
Based on feedback received, we have developed a comprehensive membership prospectus to enable prospective members to learn about Eclipse Foundation membership. The document highlights our membership structure, our unique value proposition, and the process of becoming a member. It is posted on the eclipse.org Membership page here.
Also based on feedback received, and to reflect our updated brand and strategic focus areas, we have made enhancements to the Eclipse Foundation Corporate Overview presentation. As a reminder, this presentation is made available for members to use, in whole or in part, when explaining what Eclipse does, and explaining their relationship with Eclipse.
It is available on the Eclipse Membership page here.
In case you missed the live streamed event, watch the April 2019 Quarterly Members Meeting here on YouTube.
After 15 years at 102 Centrepointe Drive in Ottawa, the Eclipse Foundation has moved into our new office space. We are now located at 2934 Baseline Road, Suite 202, Ottawa, ON, K2H 1B2. Our phone number remains the same at +1.613.224.9461. Please update our address in your records.
If you are in the area, please drop by and say hello. We would certainly welcome the opportunity to show you our amazing new digs!
Email us to take advantage of the opportunity to promote your Eclipse training or other Eclipse event on the Events Map!
Eclipse IoT Day at IoT World
May 13, 2019
Santa Clara, CA
SUMO User Conference
May 13-15, 2019
Berlin, Germany
Bosch ConnectedWorld 2019
May 15-16, 2019
Berlin, Germany
Open seminar with the LIM-IT project
May 20, 2019
Skövde, Sweden
KubeCon + CloudNativeCon Europe
May 20 – 23, 2019
Barcelona, Spain
OSS 2019
May 26-27, 2019
Montreal, Quebec | https://www.eclipse.org/community/newsletter/2019/2019May.html | CC-MAIN-2020-10 | refinedweb | 706 | 60.55 |
I wanna run cmd.exe to open a URL in the default browser
objective: create url in python script; load page in default browser in win10 from npp python script.
status: aint got no satisfaction
code frag: I ran the following from console input area at bot of console one line at a time
x=r’cmd.exe start https:yahoo.com’
console.write(x) #show x on console
y=console.run(x)
console.write(y) #show y on console
console output:
Python 2.7.6-notepad++ r2 (default, Apr 21 2014, 19:26:54) [MSC v.1600 32 bit (Intel)]
Initialisation took 31ms
Ready.
x=r’cmd.exe start https:yahoo.com’
console.write(x) #show x on console
cmd.exe start https:yahoo.com>>> y=console.run(x)
Microsoft Windows [Version 10.0.17134.706]
C:\Program Files (x86)\Notepad++>>>> console.write(y) #show y on console
help: what am I doing wrong?
start:http:yahoo.com #launchs yahoo in a new tab in my browser from a cmd window
- Alan Kilborn last edited by
Not sure what the end goal truly is. If the goal is simply to open a url in a browser there are better ways. Python has process control commands, why not use them rather than trying some awful convolution with CMD.EXE?
@Alan Kilborn
my end goal is to simply launch a uri(constructed from text in my doc) in my default browser as a new tab I’m a python newbie.so if there are good, simple python ways to do what I want, I obviously don’t know where to start. I’m happy to follow any advice/pointers you’re ready to offer. let me know If you would like more backgound; the yahoo uri is just my quick test case, obviously does not work. Thanks in Advance,
- Ekopalypse last edited by Ekopalypse
within the script you might do
import webbrowser webbrowser.open(r'')
Just a note, if you are using PythonScript plugin you have to use python2 syntax.
Migration to python3 has not be done yet. and to add code like this to the forum
here, you can type
~~~ YOUR CODE ~~~
- PeterJones last edited by PeterJones
@Raymond-Lew said:
help: what am I doing wrong?
I’ll assume that the forum formatting was what deleted your
//, because what rendered as
https:yahoo.comwould never work, since it’s really.
Notice you said "
start:http:yahoo.com#launchs yahoo in a new tab in my browser from a cmd window". And then you didn’t pass “start” to the
console.run()command? That was part of your problem. Unfortunately,
startis not an executable that
console.run()can see. Instead, you have to tell
console.run()to launch cmd.exe, and tell cmd.exe to launch start, and tell start to launch the URL.
The following sequence of interactive commands opened three tabs of yahoo.com in my default browser:
>>> console.run('cmd /c start')
0
>>> url = r''
>>> console.run('cmd /c start '+url)
0
>>> console.run('cmd /c start {}'.format(url))
0
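(A further sketch, appended for illustration: since the larger goal is a URL constructed from text in the document, the string-building half can live in plain Python and be tested outside Notepad++. The make_url name and the duckduckgo.com address are invented placeholders here.)

```python
try:
    from urllib.parse import quote_plus   # Python 3
except ImportError:
    from urllib import quote_plus         # Python 2, which PythonScript uses

def make_url(text):
    # percent-encode the text taken from the document so spaces survive
    return 'https://duckduckgo.com/?q=' + quote_plus(text)

print(make_url('hello world'))  # https://duckduckgo.com/?q=hello+world
```

In the PythonScript console the result would then be handed to the same cmd /c start pattern shown above.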
@Ekopalypse
Thank you for the webbrowser suggestion. I tried it and it gets me 90% to my objective.
on my PC it launches a window in Internet Explorer 11, not my default browser. Thank you for the suggestion and I will look deeper down this path to see if I can change this default.
@PeterJones
your code samples and comments are perfect guidance for my knowledge level.
I have much to learn, thank you all for your responses. | https://community.notepad-plus-plus.org/topic/17439/i-wanna-to-run-cmd-exe-to-open-url-in-default-browser | CC-MAIN-2020-05 | refinedweb | 589 | 77.33 |
User interface
Here at RoboFont we are not too strict concerning user interface components. If you want to use a series of checkboxes to build an interactive piano keyboard, go for it. Just be aware that checkboxes were probably not designed with that application in mind.
Consider that the user interface components you can access from the
vanilla module are distributed by Apple along with some guidelines. If you want to move away from convention, we suggest you to do it consciously.
Code
mojo vs. lib
We recommend using tools from the
mojo module rather than
lib
# this has been deprecated: from lib.tools.defaults import getDefault # use this instead: from mojo.UI import getDefault
also:
from mojo.pens import decomposePen
There are two main reasons for following this recommendation:
mojoediting functions follow the
fontPartsAPI, so they are easier to use for scripter. Differently, the tools in the
libmodule are more defcon-oriented, so they use another API with a different logic
- the
libmodule is not a publicly documented API, so it could change over time without explicit warning
Move to Merz & Subscriber
We really encourage developers to move to
Merz and
Subscriber. These new modules increase users experience quality and they make your tool easier to develop.
Be kind with user’s data
Don’t do funky things with user’s data! Before uploading extensions to Mechanic, the RoboFont team will check the extension for malicious operations and suggest improvements to the user experience.
Code style
Code style can be a controversial topic of debate (remember tabs vs spaces?) between programmers. There is no strict rule to follow, just be aware that there is well established Python Style Guide out there. | https://doc.robofont.com/documentation/topics/building-extensions/ | CC-MAIN-2021-39 | refinedweb | 282 | 54.12 |
This article shows you how to calculate the variance of a given list of numerical inputs in Python.
In case you’ve attended your last statistics course a few years ago, let’s quickly recap the definition of variance: it’s the average squared deviation of the list elements from the average value.
So, how to calculate the variance of a given list in Python?.
Let’s have a look at both methods in Python code:
# 1. With External Dependency import numpy as np lst = [1, 2, 3] var = np.var(lst) print(var) # 0.6666666666666666 # 2. W/O External Dependency avg = sum(lst) / len(lst) var = sum((x-avg)**2 for x in lst) / len(lst) print(var) # 0.6666666666666666
1. In the first example, you create the list and pass it as an argument to the
np.var(lst) function of the NumPy library. Interestingly, the NumPy library also supports computations on basic collection types, not only on NumPy arrays. If you need to improve your NumPy skills, check out our in-depth blog tutorial.
2. In the second example,.
Both methods lead to the same output.
Puzzle: Try to modify the elements in the list so that the variance is 1.0 instead of 0.66666666666 in our interactive shell:
This is the absolute minimum you need to know about calculating basic statistics such as the variance in Python. But there’s far more to it and studying the other ways and alternatives will actually make you a better coder. So, let’s dive into some related questions and topics you may want to learn!
Variance in Python Pandas
Want to calculate the variance of a column in your Pandas DataFrame?
You can do this by using the
pd.var() function that calculates the variance!
Variance in NumPy.var([1, 2, 3]) print(np.average(a)) # 0.6666666666666
Python List Variance Without NumPy
Want to calculate the variance of a given list without using external dependencies?
Calculate the average as
sum(list)/len(list) and then calculate the variance in a generator expression.
avg = sum(lst) / len(lst) var = sum((x-avg)**2 for x in lst) / len(lst) print(var) # 0.6666666666666666.
Python List Standard Deviation
Standard deviation is defined as the deviation of the data values from the average (wiki). It’s used to measure the dispersion of a data set. You can calculate the standard deviation of the values in the list by using the statistics module:
import statistics as s lst = [1, 0, 4, 3] print(s.stdev(lst)) # 1.8257418583505538
An alternative is to use NumPy’s
np.std(lst) method.
Python List Median
What’s the median of a Python list? Formally, the median is “the value separating the higher half from the lower half of a data sample” (wiki).
How to calculate the median of a Python list?
- Sort the list of elements using the
sorted()built-in function in Python.
- Calculate the index of the middle element (see graphic) by dividing the length of the list by 2 using integer division.
- Return the middle element.
Together, you can simply get the median by executing the expression
median = sorted(income)[len(income)//2].
Here’s the concrete code example:
income = [80000, 90000, 100000, 88000] average = sum(income) / len(income) median = sorted(income)[len(income)//2] print(average) # 89500.0 print(median) # 90000.0
Related tutorials:
Python List Mean
The mean value is exactly the same as the average value: sum up all values in your sequence and divide by the length of the sequence. You can use either the calculation
sum(list) / len(list) or you can import the
statistics module and call
mean(list).
Here are both examples:
lst = [1, 4, 2, 3] # method 1 average = sum(lst) / len(lst) print(average) # 2.5 # method 2 import statistics print(statistics.mean(lst)) # 2.5
Both methods are equivalent. The
statistics module has some more interesting variations of the
mean() method (source):
These are especially interesting if you have two median values and you want to decide which one to take.
Python List Min Max
There are Python built-in functions that calculate the minimum and maximum of a given list. The
min(list) method calculates the minimum value and the
max(list) method calculates the maximum value in a list.
Here’s an example of the minimum, maximum and average computations on a Python list:
import statistics as s lst = [1, 1, 2, 0] average = sum(lst) / len(lst) minimum = min(lst) maximum = max(lst) print(average) # 1.0 print(minimum) # 0 print(maximum) # 2
Where to Go From Here
Summary:. | https://blog.finxter.com/how-to-get-the-variance-of-a-list-in-python/ | CC-MAIN-2022-21 | refinedweb | 772 | 57.37 |
Well,.
Matthew wrote:
>>I am in no way trying to attack you. I am just pointing out that C and
>>C++ breeds bad programming practice, and we need protection from them.
> [snip]
> Bottom line: if you're a good engineer, you're a good engineer. If you're
> not, you're not. The language used won't affect this truth. And avoiding
> peaking inside abstractions won't help you become one.
I think you didn't get his point: he's not worried that /he/ will misuse
pointers, he's worried that _his colleagues_ will.
>.
By implementation detail, are you speaking to it nulling the pointer? I
was pretty sure that was in the spec, and not in the implementation.
Delete is needed if you ever want to immediately call a destructor. If
used wisely, it can also decrease the memory usage of your software, and
reduce garbage collection runs (if the GC won't run unless there's more
than X to collect.)
Overriding new and delete would definitely fit into the same class as
pointers, recursion, casting, != in fors, and delete. They're all scary.
-[Unknown]
> Chris Miller wrote:
>> On Mon, 13 Feb 2006 00:26:48 -0500, nick <nick.atamas@gmail.com> wrote:
>>
>>> Now you're talking crazy talk. Throws declarations may be a bad idea - I
>>> agreed after having read up on it. I have yet to hear a good reason why
>>> the unsafe keyword or some other safeguard against dangerous pointer
>>> code is a bad idea.
>>>
>> Then would 'delete' be 'unsafe'? Even though it nulls the reference,
>> other places may still be referencing it, hence making it unsafe.
>
> That seems to be an implementation detail. However, my immediate
> reaction is that delete probably should be unsafe; however, I am not
> sure. It all depends on how much it is needed for mainstream software
> development and how much damage it tends to cause.
>
> Of course, if you are talking about overriding new and then calling
> delete, that's a different story. By allocating memory manually you are
> preventing a good garbage collector from optimizing your heap, so you
> should be avoiding that in most cases.
>
> The upshot of using "unsafe" is that all code that messes with the
> memory manually would get marked unsafe. So, someone working on OS
> features may end up having to put an "unsafe:" at the top of every file
> and compiling with the --unsafe flag (or something to that effect). It
> seems like a small price to pay for preventing amateurs from screwing up
> your code.
>
> It seems to me that most people who write code don't need pointers. Both
> D and C++ are languages that provide high-level and low-level access.
> You are going to get both experts who need the pointers and amateurs who
> don't need them.
>
> Both Bjarne and Matthew seem to think that people should just "learn to
> code well". Despite admitting that most coders are not experts, Bjarne says:
>
> "The most direct way of addressing the problems caused by lack of type
> safety is to provide a range-checked Standard Library based on
> statically typed containers and base the teaching of C++ on that".
> <>
>
> I must disagree. There are too many people to teach. In some cases it is
> a lot easier to modify a language than to teach everyone not to use a
> feature. This may be one of those cases. I think experts tend to forget
> that a language is there to help programmers develop software and to
> reduce chances of human error.
Why don't you give them access to a scripting language? Perhaps
something like Python/Ruby or even DMDScript?
If performance is an issue, just make sure the scripting language
doesn't allow eval (which is so much more evil than pointers, by the
way) and you should be able to convert easily.
-[Unknown]
> Note: I did a search for this and didn't come up with any threads. If it
> has been discussed before... my apologies.
>
>
> Recently I introduced D to a friend of mine (a C.S. grad student at
> Purdue). His reaction was the usual "wow, awesome". Then he became
> concerned about pointer safety. D allows unrestricted access to pointers.
>
> I will skip the "pointers are unsafe" rigmarole.
>
> Should D provide something analogous to the C# unsafe keyword?
On Sun, 12 Feb 2006 22:33:25 -0800, Unknown W. Brackets wrote:
>> Most programmers are amateurs; you're not going to change that.
More indication that we could really do with a 'lint' program for D. It
could warn about pointer usage too.
--
Derek
(skype: derek.j.parnell)
Melbourne, Australia
"Down with mediocracy!"
13/02/2006 5:44:24 PM
Pointer problems are notoriously difficult to track. Pointers are a
feature that is not necessary in 90% of production code. Hey, Joel
called them DANGEROUS. (I'm going to use that one a lot now.)
My example demonstrates a potential error that, if occurs in a library
that you don't have source for, will cause you hours of grief. My
example was carefully constructed. In it an object was passed in using
the /in/ keyword. That should guarantee that my copy of the object
doesn't change. If you are saying it is OK for it to change, then you
are basically saying that the /in/ keyword is useless (well, not really
useless but almost). I don't think that's cool.
Unknown W. Brackets wrote:
> What's going to stop them from making other mistakes, unrelated to
> pointers? For example, the following:
>
> void surprise(in char[] array)
> {
> ubyte[100] x = cast(ubyte[100]) array;
> array[99] = 1;
> }
>
> This will compile fine, and uses zero pointers. It's exactly the same
> concept, too.
No, it won't compile. Maybe I have a different version of dmd, but I get
this:
main.d(3): e2ir: cannot cast from char[] to ubyte[100]
Try it yourself.
The rest of these aren't really pointer bugs. So, if you want to try a
slippery slope and argue that all of programming is unsafe, be my guest.
It isn't particularly productive though. (Sorry, I am getting cranky;
it's late.)
Here's another one:
>
> void surprise(in int i)
> {
> if (i == 0 || i > 30)
> return i;
> else
> return surprise(--i);
> }
>
> Oops, what happens if i is below 0? Oh, wait, here's another common
> mistake I see:
>
> for (int i = 0; i != len; i++)
> {
> ...
> }
>
> What happens if len is negative? I've seen this happen, a lot, in more
> than a few different people's code. They weren't stupid, you're right,
> but it did happen.
>
> So do we mark != in fors as "unsafe"? Recursion too? And forget
> casting, any casting is unsafe now as well?
>
> Seems to me like you're going to end up spending longer dealing with
> their problems, if they think they can use pointers but really can't,
> than you would just reviewing their darn code.
>
> Oh wait, it's only open source where you do that "code review" thing.
> Otherwise it's considered a waste of time, except in huge corporations.
> Why bother when "unsafe" can just fix everything for you like magic?
>
> Valentine's day is coming up... good thing there are flowers, they just
> fix everything too. I can be a jerk and all I need are flowers, right?
> Magic.
>
> -[Unknown]
Unknown W. Brackets wrote:
> Well,.
That's an easy one. You can't do unsafe things without wrapping your
code in the unsafe keyword. That's fairly easy to add, if you ask me.
However, when that amateur gets the compiler error, he/she will look it
up. Once they do, there will be a big notice "DANGER, USE THIS INSTEAD".
I work with a lot of EEs who only had one or two programming courses.
They get a job mainly based on their hardware architecture knowledge.
Now they have I have to work with them and write a hardware simulator.
Oh, I don't know if you realize this, but essentially removed
/in/out/inout from the D spec with my example; please go read it.
If you think that people are going to use the language the RIGHT way
when there is such a tempting wrong way, I suggest you look at C++ and
its operator overloading.
Andrew Fedoniouk wrote:
>>.
I didn't say I had a solution, I just said I have a problem. The
"unsafe" thing is just some syntax that looked pretty cool in C#.
If c-style pointers are left the way they are now, you might as well not
have in/out/inout parameters. To save you from reading the rest of the
thread, here is an example:
CODE:
-----
import std.stdio;
class A
{
private int data[];
public this()
{
data.length = 10;
}
public void printSelf()
{
writefln("Data: ", this.data);
}
}
void surprise(in A a)
{
byte *ap = cast(byte *)(a);
ap[9] = 5;
}
int main()
{
A a = new A();
a.printSelf();
surprise(a);
a.printSelf();
return 0;
}
OUTPUT:
-------
Data before surprise: [0,0,0,0,0,0,0,0,0,0]
Data after surprise:
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4287008,0,2004,216,1245184,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,8855552,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,8855680,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,8855808,
<..SN]
Derek Parnell wrote:
> On Sun, 12 Feb 2006 22:33:25 -0800, Unknown W. Brackets wrote:
>
>>> Most programmers are amateurs; you're not going to change that.
>
> More indication that we could really do with a 'lint' program for D. It
> could warn about pointer usage too.
>
>
A lint-like tool may be the way to go. However, there definitely need to
be an in-language solution to the /in/ parameter problem. That seems to
be unacceptable (see my previous posts for the details).
There is a lint-like project for Java called Find Bugs. Bill Pugh at
UMCP is leading it. I happen to know Dr. Pugh; he taught one of my
courses and sponsored my senior C.S. project. If someone decides to work
on a lint-like tool, I will be happy to introduce them to Dr. Pugh. | http://forum.dlang.org/thread/dsobph$2mdo$1@digitaldaemon.com?page=3 | CC-MAIN-2014-10 | refinedweb | 1,793 | 75.71 |
A sink block which logs its input to memory. More...
#include <drake/systems/primitives/signal_logger.h>
A sink block which logs its input to memory.
This data is then retrievable (e.g. after a simulation) via a handful of accessor methods. This class essentially holds a large Eigen matrix for data storage, where each column corresponds to a data point. This system saves a data point and the context time whenever its Publish() method is called.
Instantiated templates for the following kinds of T's are provided:
Construct the signal logger system.
Access the logged data.
Returns the only input port.
Access the (simulation) time of the logged data.
Sets the publishing period of this system.
See LeafSystem::DeclarePeriodicPublish() for details about the semantics of parameter
period. | http://drake.mit.edu/doxygen_cxx/classdrake_1_1systems_1_1_signal_logger.html | CC-MAIN-2018-43 | refinedweb | 127 | 52.87 |
In this tutorial, I am going to write a program that will check whether a number is a Perfect Number or Not.
After completing this tutorial you will be able to understand:
- Write a Program to check a number is a Perfect Number or not?
What Is Perfect Number?
In number theory, a perfect number is a positive integer that is equal to the sum of its proper positive divisors, that is, the sum of its positive divisors excluding the number itself (also known as its aliquot sum).
Here you can see an example in below image.
Let’s start writing a Program in C#
Step 1 – Open Visual Studio and create a Console Application using C# with the name PerfectNumberDemo.
Step 2- Navigate to Program.cs file and write the following code into it.
using System;
namespace PerfectNumberDemo
{
class Program
{
static void Main(string[] args)
{
int iNumber, iSum = 0, iTemp;
Console.Write("Enter the Number");
iNumber = int.Parse(Console.ReadLine());
iTemp = iNumber;
for (int iCount = 1; iCount < iNumber; iCount++)
{
if (iNumber % iCount == 0)
{
iSum = iSum + iCount;
}
}
if (iSum == iTemp)
{
Console.WriteLine("\n The Given number is a Perfect Number");
Console.ReadLine();
}
else
{
Console.WriteLine("\n The Given Number is not a Perfect Number");
Console.ReadLine();
}
}
}
}
All done now we need to run the application to see the output of the above program.
You must also follow the following links:
Hope you loved this small session about a simple Program in C#.
Thank You.
It is appropriate time to make some plans for the longer term and it’s time to be happy. I’ve read this submit and if I could I want to recommend you some attention-grabbing issues or suggestions. Maybe you can write next articles referring to this article. I want to learn even more things about it!|
I have read so many articles or reviews about the blogger lovers except this paragraph is genuinely a good paragraph, keep it up.|
alternatives to gabapentin and pregabalin
Hi, this weekend is good in favor of me, because this moment i am reading this great educational article here at my home.|
You’re so cool! I do not suppose I have read through anything like that before. So wonderful to discover another person with a few original thoughts on this subject. Seriously.. thanks for starting this up. This website is one thing that is required on the internet, someone with a bit of originality!|
Hello, i feel that i noticed you visited my blog so i got here to go back the desire?.I’m attempting to to find things to improve my web site!I suppose its adequate to make use of a few of your ideas!!|. Many thanks!|
alprostadil dosage alprostadil pellets
generic vardenafil 20mg india vardenafil, dapoxetine)
tadalafil generic tadalafil pills 20mg
sildenafil 10 mg price of sildenafil citrate
For the reason that the admin of this web page is working, no question very shortly it will be famous, due to its feature contents.|
471476 342536Rattling excellent information can be identified on internet weblog . 941368
I’m usually to running a blog and i actually respect your content. The article has really peaks my interest. I’m going to bookmark your website and keep checking for brand new information.
adult ads
local women dates
Awesome post.| | http://debugonweb.com/2020/01/perfect-number/ | CC-MAIN-2021-25 | refinedweb | 551 | 65.01 |
#include <db_cxx.h> int DbEnv::get_memory_init(DB_MEM_CONFIG struct, u_int32_t *countp);
The
DbEnv::get_memory_init() method returns
the number of objects to allocate and initialize when an
environment is created. The count is returned for a specific named
structure. The count for each structure is set using the
DbEnv::set_memory_init()
method.
The
DbEnv::get_memory_init() method may be
called at any time during the life of the application.
The
DbEnv::get_memory_init()
method either returns a non-zero error value or throws an
exception that encapsulates a non-zero error value on
failure, and returns 0 on success.
The struct parameter identifies the structure for which you want an object count returned. It must be one of the following values:
DB_MEM_LOCK
Initialize locks. A thread uses this structure to lock a page (or record for the QUEUE access method) and hold it to the end of a transactions.
DB_MEM_LOCKOBJECT
Initialize lock objects. For each page (or record) which is locked in the system, a lock object will be allocated.
DB_MEM_LOCKER
Initialize lockers. Each thread which is active in a transactional environment will use a locker structure either for each transaction which is active, or for each non-transactional cursor that is active.
DB_MEM_LOGID
Initialize the log fileid structures. For each database handle which is opened for writing in a transactional environment, a log fileid structure is used.
DB_MEM_TRANSACTION
Initialize transaction structures. Each active transaction uses a transaction structure until it either commits or aborts.
DB_MEM_THREAD
Initialize thread identification structures. If thread tracking is enabled then each active thread will use a structure. Note that since a thread does not signal the BDB library that it will no longer be making calls, unused structures may accumulate until a cleanup is triggered either using a high water mark or by running DbEnv::failchk(). | http://idlebox.net/2011/apidocs/db-5.2.28.zip/api_reference/CXX/envget_memory_init.html | CC-MAIN-2013-48 | refinedweb | 297 | 56.15 |
Sometimes I spend significant time in R or Python trying to do
something which is trivial is bash. This is especially useful when I’m
working with very large files that will take a long time to read
in. Why read in an entire file to get the last line, when I could just
use
tail -n 1? Or if I want the line count, why read it in when
wc
-l will get the job done faster?
It turns out that it’s not too complicated to capture shell output in R or Python. Here’s how I do it.
Python
If you use Python 3, capturing shell output is pretty simple (if
you’re still on Python 2, the tides are turning! It’s time to make the
change!). You can use the
subprocess module to get the output in
bytes, then decode and parse it.
import subprocess ## Get the last line of the file 'fname' last_line = subprocess.check_output("tail -n 1 " + fname, shell = True) ## convert to string and parse ## 'UTF-8' is a common encoding, but you may need to use something else last_line = last_line.decode('UTF-8').strip()
R
R makes this process easy too. You may have used
system() before to
submit shell commands. It turns out that if you set the argument
intern = TRUE, you’ll get the output as a character vector– you
don’t even have to deal with encoding! The output may take some
parsing, but the
stringr package is good for that.
require(stringr) ## Get the last line of the file 'fname' lastLine = system(stringr::str_c("tail -n 1 ", fname), intern = TRUE) ## strip leading/trailing whitespace lastLine = stringr::str_trim(lastLine)
This has saved me from reinventing the wheel many times since I learned it. Hopefully it helps you too! | http://www.lizsander.com/programming/2016/03/31/Capturing-Shell-Output-in-R-and-Python.html | CC-MAIN-2019-09 | refinedweb | 299 | 80.21 |
Chapter 6
Web Services
Web Services in Practice
You may have already used a web service without knowing it: Microsoft's Passport single-sign-in service is one example. For sites using the Passport authentication service, it's no longer necessary to memorize or track numerous username/password pairs.
The potential for consumer-oriented and business-to-business Web Services like HailStorm is great, although there are serious and well-founded concerns about security and privacy. In one form or another, though, Web Services are here to stay, so let's dive in and see what's underneath.
Web Services Framework
Web Services combine the best of both distributed componentization and the World Wide Web. It extends distributed computing to broader ranges of client applications. The best thing is that it does it by seamlessly marrying and enhancing existing technologies.
Web Services Architecture
Web Services are distributed software components that are accessible through standard web protocols. The first part of that definition is similar to that:
-
- The process of advertising or publishing a piece of software as a service and allowing for the discovery of this service..
Web Services Wire Formats.
HTTP GET and HTTP POST.
SOAP.
Web Services Description (WSDL).
WSDL Structure
The root of any web service description file is the
<definitions>element. Within this element, the following elements provide both the abstract and concrete description of the service:
- Types
- A container for datatype.
- Port Type
- An abstract set of operations supported by one or more endpoints.
- Operation
- An abstract description of an action supported by the service. Each operation specifies the input and output messages defined as
<message>elements.
- Binding
-.
- Service
- A collection of network endpoints--ports. Each of the web service wire formats defined earlier constitutes a port of the service (HTTP GET, HTTP POST, and SOAP ports).
- Port
- A single endpoint defined by associating a binding and a network address. In other words, it describes the protocol and data-format specification to be used as well as the network address of where the web service clients can bind to for the service.( ) will both the HTTP GET and HTTP POST protocols, the binding is
<http:binding>with the verb being GET and POST, respectively. Because the GET and POST verbs are part of the HTTP protocol, there is no need for the extended HTTP header like
soapActionfor,, specified at; and
scl, which points to, where the schema for the service discovery and service contract language is described. The
contractRefelement specifies the URL where yourWebService WSDL can be obtained. Right below that is the
discoveryRefelement, which links to the discovery file on yourBrotherSite web site. This linkage allows for structuring networks of related discovery documents.
Dynamic discovery:
<?xml version="1.0" ?>
<dynamicDiscovery xmlns="urn://schemas-dynamic:disco.2000-03-17">
<exclude path="_vti_cnf" />
<exclude path="_vti_pvt" />
<exclude path="_vti_log" />
<exclude path="_vti_script" />
<exclude path="_vti_txt" />
</dynamicDiscovery>
Discovery setting in practice
excludeargument to XML nodes to exclude their directories from the dynamic discovery document.
UDDI.
The System.Web.Services Namespace
Now that we have run through the basic framework of Microsoft .NET Web Services, let us take a look inside what the .NET SDK provides us in the System.Web.Services namespace.
There are only a handful of classes in the System.Web.Services namespace:
- WebService
- The base class for all web services.
- WebServiceAttribute
- An attribute that can be associated with a Web Service-derived class.
- WebMethodAttribute
- An attribute that can be associated with public methods within a Web Service-derived class.
- WebServicesConfiguration
- Information needed for the Web Service runtime.
- WebServicesConfigurationSectionHandler
- Information needed for the Web Service runtime.
The two most important classes in the System.Web.Services namespace are WebService and WebMethodAttribute. In this book, we do not discuss helper classes dealing with the runtime of web services.
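As a sketch of how these two classes work together, a minimal asmx provider might look like the following (the connection string and query are illustrative, not the chapter's full listing):

```csharp
<%@ WebService Language="C#" Class="PubsWS" %>

using System.Data;
using System.Data.SqlClient;
using System.Web.Services;

[WebService(Namespace="http://yourcompany.com/pubs/")]
public class PubsWS : WebService
{
    [WebMethod]
    public DataSet GetBooks()
    {
        // Query the sample Pubs database and return the result as a DataSet.
        SqlDataAdapter da = new SqlDataAdapter(
            "select * from titles",
            "server=(local);uid=sa;pwd=;database=pubs");
        DataSet ds = new DataSet();
        da.Fill(ds, "Books");
        return ds;
    }
}
```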
Web Services Provider
Web Service Provider Example
We will be building a web service called PubsWS to let consumers get information from the sample Pubs database. All data access will be done through ADO.NET, so make sure you've.[3] It is highly recommended that you specify a namespace for your web service before publishing it publicly because the default namespace,, will not uniquely identify your web service from other web services. To do this, all you have to do is.;uid=sa;pwd=;";
In VB, the WebMethod attribute appears in the method declaration itself: Public Function <WebMethod()> GetBooks() As DataSet.
- If you do not intend to use session state for the web method, you might want to disable this flag so that the web server does not have to generate and manage session IDs for each user accessing this web method.
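Flags like this one, along with several other properties of WebMethodAttribute, are set directly in the attribute. For example (the values are illustrative):

```csharp
[WebMethod(EnableSession=false,      // no session-ID management for this method
           BufferResponse=true,      // buffer the whole response before sending
           CacheDuration=60,         // cache the result for 60 seconds
           Description="Returns the Books table as a DataSet.")]
public DataSet GetBooks() { /* ... */ }
```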
If you set up your web services from scratch, you might also need to provide the configuration file (web.config) in the same directory as your asmx file. This configuration file allows you to control various application settings about the virtual directory. The only thing we recommend definitively is to set the authentication mode to
None to make our web services development and testing a little easier. When you release your web services to the public, you would probably choose a stronger authentication scheme.
HTTP GET Consumer
If you point your web browser at the URL of the asmx file, it will give you a list of supported methods. To find out more about these methods, click one of them. This brings up a default web service consumer. This consumer, autogenerated through the use of reflection, is great for testing your web services' methods.[4] A method that returns a DataSet sends back the dataset's XSD schema followed by the data itself:
<DataSet>
<xsd:schema id="NewDataSet"
targetNamespace="" xmlns=""
xmlns:xsd=""
xmlns:
<xsd:element
<xsd:complexType>
<xsd:choice
<xsd:element
<xsd:complexType>
<xsd:sequence>
<xsd:element name="au_id"
msdata:
<xsd:element name="au_lname"
msdata:
<xsd:element name="au_fname"
msdata:
<xsd:element name="phone"
msdata:
<xsd:element name="address"
msdata:
<xsd:element name="city"
msdata:
<xsd:element name="state"
msdata:
<xsd:element name="zip"
msdata:
<xsd:element name="contract"
msdata:
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:choice>
</xsd:complexType>
</xsd:element>
</xsd:schema>
<NewDataSet xmlns:msdata="urn:schemas-microsoft-com:xml-msdata"
xmlns:
<updg:sync>
<msdata:unchanged>
<SelectedAuthor updg:
>
</msdata:unchanged>
</updg:sync>
</NewDataSet>
</DataSet>
HTTP POST Consumer
In the section "HTTP GET Consumer," we saw the automatic creation of a web services consumer just by hitting the URL of the web service. It is now time for us to see how a web client can use HTTP POST and SOAP to access a web service. This time around, we are going to write a C# web service consumer.
The Microsoft .NET SDK comes with a rich set of tools to simplify the process of creating or consuming web services. We are going to use one of these tools, wsdl, to generate source code for the proxies to the actual web services:[5]
wsdl /l:CS /protocol:HttpPost
This command line creates a proxy for the PubsWS web service from the WSDL (Web Services Description Language) document obtained from the URL. The proxy uses HTTP POST as its protocol to talk to the web service. If you look at this generated C# file, you will see that it contains a proxy class PubsWS that derives from HttpPostClientProtocol class. If you use the
/protocol:HttpGet or /protocol:SOAP parameters, then the generated PubsWS class derives from HttpGetClientProtocol or SoapHttpClientProtocol, respectively.
After generating the C# source file PubsWS.cs, we can use the proxy class it contains to call the GetBooks web method and get a DataSet as the result. The remaining lines bind the default view of the Books table to a data grid:

/* Create the proxy, call the GetBooks web method via SOAP, and get the data set. */
PubsWS oProxy = new PubsWS( );
DataSet oDS = oProxy.GetBooks( );

/* Create a data grid and connect it to the data set. */
DataGrid dg = new DataGrid( );
dg.Size = new Size(490, 270);
dg.DataSource = oDS.Tables["Books"].DefaultView;

This code creates the proxy, obtains the dataset through a SOAP call, and displays a data grid containing that dataset. Figure 6-4 shows the output of the C# client after obtaining the data from the PubsWS web service via the SOAP protocol.
Here is an excerpt from the VB web service client, TestProxy.vb :
Dim oProxy as PubsWS = New PubsWS( )
Dim oDS as DataSet = oProxy.GetBooks( )
DataGrid1.DataSource = oDS.Tables("Books").DefaultView
You can compile the VB web service client with this command (type the entire command on one line):
vbc TestProxy.vb
/r:System.Drawing.dll
/r:System.Windows.Forms.dll
/r:System.Data.dll
/r:PubsWS.dll
/r:System.Web.Services.dll
/r:System.dll
/r:System.Xml.dll

With HTTP POST, we pass the parameters to the server in the body of the message, whereas with HTTP GET the parameters are passed in the URL itself.

(…) We can also control access to the PubsWS.asmx file. Instead of keeping the default setting, which leaves this file accessible to all anonymous users, we change this setting to
Basic
Authentication. After this change, only users that pass the authentication can make use of the web service.
For real-life situations, of course, we are not just going to use the Basic Authentication method because it sends the username and password in clear text through the HTTP channel. We would choose other methods, such as Secure Sockets Layer (SSL) underneath Basic Authentication, so that the data passed back and forth is secure. Available methods include:
Withdraw(access token, account number, amount, balance) returns T/F.
Summary
In this chapter, we've introduced you to the new paradigm of application-to-application communication. Not only do the Web Services in your system not have to be implemented in the same language, they don't even have to be on the same platform. Because of this greater interoperability, Web Services are very suitable for business-to-business (B2B) integration.
1. Current Microsoft .NET SOAP implementation runs on top of HTTP.
2. If you use Visual Studio.NET to create your web service, the discovery file is created automatically.
3. You will have to get to the Request and Response objects through the Context property of the WebService class.
4. A simple Reflection example can be found in the section .
5. wsdl.exe generates the proxy source code similar to the way IDL compilers generate source files for DCOM proxies. The only difference is that WSDL is the language that describes the interface of the software component, which is XML-based.
Back to: .NET Framework Essentials
© 2001, O'Reilly & Associates, Inc.
webmaster@oreilly.com
Ticket #9042 (closed defect: fixed)
OS/2 guest crashes on floating point exception => fixed in svn
Description (last modified by michaln) (diff)
The following code snippet (which is supposed to die by SIGFPE) causes an OS/2 guest to trap. Kernel is 14.104_W4.
#include <stdio.h>
#include <float.h>

int main(void)
{
    double d1 = 1.0;
    double d2 = 0.0;

    _control87(0, 0x1f);       /* unmask invalid, denormal, zero-divide, overflow, underflow */
    printf("%lf\n", d1 / d2);  /* divide by zero */
    return 0;
}
Attachments
Change History
Changed 5 years ago by rudiIhle
- attachment eCS 2.0-2011-06-08-09-58-29.log
added
Log file
comment:2 Changed 5 years ago by dbsoft
This appears to be the bug I have been experiencing on my Macs, which have Intel CPUs. My AMD-based Windows 7 64-bit systems don't experience this problem; there the snippet dies with SIGFPE as expected.
comment:3 Changed 5 years ago by bird
I have not been able to reproduce this with the current trunk version of VirtualBox, testing on both intel i7 core and amd phenom 2.
comment:4 Changed 5 years ago by dbsoft
It happens on both my Macs... MacBook Pro with an Intel Core 2 Duo and MacPro with an Intel Quad Core Xeon.
It does not happen on my PCs with AMD Phenom 2 and X2.
comment:5 Changed 5 years ago by rudiIhle
With 4.1.4r74291 it happens here. See attached screenshot...
Changed 5 years ago by rudiIhle
- attachment trapscreen.png
added
Trap screen after executing the program snippet.
Changed 5 years ago by dbsoft
- attachment VBox-4.1.6-r74713-eCS2-Mac.png
added
Trap screen with eCS 2.0 on MacBook Pro w/ Intel Core 2 Duo
comment:6 follow-up: ↓ 7 Changed 5 years ago by Erdmann
I can add that this very same trap also happens with more complex OS/2 applications, namely Firefox 8.x and Seamonkey 2.3. But I guess that can be expected as the error is so fundamental.
comment:7 in reply to: ↑ 6 Changed 5 years ago by Erdmann
I can add that this very same trap also happens with more complex OS/2 applications, namely Firefox 8.x and Seamonkey 2.3. But I guess that can be expected as the error is so fundamental.
Forgot to add: I am using VirtualBox Version 4.1.6.
Changed 5 years ago by Erdmann
- attachment eComStationV2-2011-11-27-17-13-25.log
added
Yet another log: Windows 7 host, Intel Core2 Duo CPU, 2 GB RAM
comment:9 Changed 5 years ago by michaln
Reproducible here on an Intel Core 2 Quad host. No problem on an AMD system. I wonder if this is specific to the crummy old VT-x implementation.
comment:10 Changed 5 years ago by erdmann
I still experience this crash; funny enough, it now occurs less often since I upgraded to Seamonkey 2.5 from Seamonkey 2.3. Maybe Seamonkey 2.5 uses less floating point. Unfortunately I don't know the technical details behind VT-x (I could have a look into the Intel manual but I am sure I am lagging years behind ...). I am using an Intel Core2 Duo with Windows 7 as host. I had another trap on bootup just now; unfortunately I forgot to take a photo. The only thing I can say is that it happens pretty randomly. If you want me to test anything ...
Changed 5 years ago by erdmann
- attachment erdmann.png
added
Sudden trap on using Seamonkey 2.5
comment:11 Changed 5 years ago by erdmann
I got a trap using Seamonkey. I attached trap screen. With the very same kernel the trap address has now changed.
comment:12 Changed 4 years ago by michaln
There's one crucial piece of information missing here. This problem does not occur with the SMP OS/2 kernel. The reason being that the SMP kernel runs with the CR0.NE bit set. The actual number of processors in the guest does not matter.
comment:13 Changed 4 years ago by lerdmann
For information: the traps still occur with VirtualBox 4.1.10.
CR0.NE bit set implies that there is some old interrupt controller around that generates IRQ 13 on a floating point exception, correct ?
Ok, I will now install the SMP kernel in virtual box and see if the problem goes away. Maybe that's also the reason why traps occur so frequently when Seamonkey is in use. I have the impression Seamonkey creates a lot of floating point exceptions that are then handled internally by the application. But the underlying mechanisms have to work ...
comment:14 Changed 4 years ago by michaln
Sure, it's the same with 4.1.10. No one said anything changed.
It's the other way around with CR0.NE. When it's set, it means the "new style" (implemented since the 286) math error handling should be used, i.e. #MF exception. That's also the only way a SMP system can work. The OS/2 UNI kernels use the ancient FERR/IRQ13/IGNNE math error handling (CR0.NE clear) which clearly doesn't work right in VirtualBox. No modern OS uses that, Windows 9x was the only other important OS which used the old style math error handling. Besides DOS, of course.
comment:15 Changed 4 years ago by lerdmann
Sorry, yes, I meant it the other way around for CR0.NE.
In any case, I have just upgraded to the SMP kernel and I am using Seamonkey 2.5 under an OS/2 guest in VirtualBox. Should traps occur again, I will post them here.
By the way: when you say "OS/2 UNI kernels" do you mean only the "W4" kernel or also the "UNI" kernel ? I never really understood why there is yet a third kernel variant (UNI) besides the other 2 (W4, SMP).
Changed 4 years ago by lerdmann
- attachment boottrap.PNG
added
Trap on boot with SMP kernel (one CPU, no PSD loaded)
comment:16 Changed 4 years ago by lerdmann
I had a trap on bootup: SMP kernel, one CPU only, no I/O APIC VM emulation, no PSD loaded. As I can tell from the trap screen, the CR0.NE bit is NOT set even though it's an SMP kernel. Does that mean that I either need a PSD to operate or that the OS needs its time to change from CR0.NE = 0 to CR0.NE = 1 ? How do you handle Win 9x and DOS guests ? As far as I understand they also set CR0.NE = 0.
comment:17 Changed 4 years ago by michaln
I believe a PSD is required. It's also true that early in the boot, CR0.NE is not set. I can't say exactly when it does get set. See INIT_USE_FPERR_TRAP in SMP.INF.
Win9x and DOS guests are handled the same as OS/2. It would appear that running with FP exceptions unmasked is extremely rare on those guests.
comment:18 Changed 4 years ago by lerdmann
No, you can run an SMP kernel without a PSD. But of course, you will only get to use the BSP and none of the ASPs. But it surely looks like running the SMP kernel with a PSD (OS2APIC.PSD or ACPI.PSD) prevents the traps. I guess that OS2APIC.PSD sets CR0.NE very early in the boot process. For ACPI.PSD I could ask the developer and find out if it also explicitly sets CR0.NE (sets flag INIT_USE_FPERR_TRAP).
I have now readded OS2APIC.PSD to config.sys and enabled "I/O APIC" in the VM configuration. At the same time I have only enabled one CPU core (of two CPU cores available) in the VM configuration because mouse tends to get jerky with > 1 CPU core. I will see if this eliminates the traps on the long term.
comment:19 Changed 4 years ago by michaln
There's a Windows test build at
It would be nice if someone could try it and check if the guest OS traps are gone. Please note that this is a development build and I'm not interested in anything other than whether the traps are gone or not.
I should also note that the issue does NOT affect AMD CPUs.
comment:20 Changed 4 years ago by rudiIhle
Hmm, installed it over a 4.1.8 and it would not start due to "driver structure changed" or so. Had some trouble getting back to a working setup (now at 4.1.10), so I'm not too enthusiastic to try again.
comment:21 Changed 4 years ago by frank
The problem is simply that you still have the 4.1.10 Extension Pack installed. If you don't need USB2 for that VM, just disable USB2 in the VM settings, otherwise we could provide you a test build of the 4.1.51 Extension Pack. Do you need one?
comment:22 Changed 4 years ago by dbsoft
All of my Windows machines have AMD, is there a Mac testbuild?
comment:23 Changed 4 years ago by rudiIhle
I think it would be good to have the 4.1.51 extension pack as well.
comment:24 Changed 4 years ago by frank
comment:25 Changed 4 years ago by rudiIhle
O.K., first of all, I don't get the trap in the guest anymore. However, there still seems to be something not quite right. When running the test program above on a freshly booted up guest I get a SIGFPE (as expected). But the location appears to be somewhere in the runtime lib instead of in the program code itself. Also, when running the program three or more times in a row, no SIGFPE is thrown anymore. Instead it simply continues printing out the unmodified value of "d1" (i.e. 1.00000).
comment:26 Changed 4 years ago by dbsoft
Similar here on both Macs...
10:56:00a nuke@ECS-[C:\HOME\DEFAULT]test
Killed by SIGFPE pid=0x0041 ppid=0x0040 tid=0x0001 slot=0x007f pri=0x0200 mc=0x0001 C:\HOME\DEFAULT\TEST.EXE LIBC063 0:0009a244 cs:eip=005b:1f39a244 ss:esp=0053:0212dddc ebp=0212de48
ds=0053 es=0053 fs=150b gs=0000 efl=00012202
eax=00000066 ebx=0212ff7c ecx=0212ff74 edx=0212df10 edi=00010032 esi=00000066 Process dumping was disabled, use DUMPPROC / PROCDUMP to enable it.
10:56:01a nuke@ECS-[C:\HOME\DEFAULT]test 1.000000
10:57:14a nuke@ECS-[C:\HOME\DEFAULT]test 1.000000
comment:27 Changed 4 years ago by michaln
Yes, the exception may not be reported in the correct place. Is the behavior on AMDs any different?
comment:28 Changed 4 years ago by dbsoft
Michal you are correct the behavior does seem to be the same on AMD... although it seems unexpected to me on both.
11:27:01a nuke@ECS-[C:\HOME\DEFAULT]test
Killed by SIGFPE pid=0x0046 ppid=0x0040 tid=0x0001 slot=0x007f pri=0x0200 mc=0x0001 C:\HOME\DEFAULT\TEST.EXE LIBC064 0:00083123 cs:eip=005b:1f373123 ss:esp=0053:0212ddf0 ebp=0212de48
ds=0053 es=0053 fs=150b gs=0000 efl=00012286
eax=0212df10 ebx=00000004 ecx=ffffffff edx=80000000 edi=00000000 esi=00000066 Process dumping was disabled, use DUMPPROC / PROCDUMP to enable it.
11:27:02a nuke@ECS-[C:\HOME\DEFAULT]test 1.000000
comment:29 Changed 4 years ago by lerdmann
I am not sure if this is related, if not, just ignore:
I am running the rusty old 16-bit Microsoft C compiler for OS/2. It's CL.exe with its subcomponents C1.exe (preprocessor(?)), C2.exe (tokenizer(?), optimizer(!)), C3.exe (output generator(?)). There also exists large memory model variants C1L.exe,C2L.exe,C3L.exe that can deal with large source files. I would have to use C2L.exe because I am using /Oe /Og (global optimizations) with rather large source files which require it (otherwise I get a warning that global optimizations cannot be performed for this and that routine).
I therefore specify /B2c2l.exe either on commandline or via CL env. var.
When I run the compiler with /B2... on a W4 kernel within VirtualBox, it just works but then I occasionally have these general trap problems.
When I run the compiler with /B2... on an SMP kernel within VirtualBox, I get "varying" results. I never get a trap but on the first run I might get a "C1001" compiler error, whereas on subsequent runs I will get a "Command line error D2030: INTERNAL COMPILER ERROR in 'P2'". But the internal compiler error might also occur on the very first run.
This is true for a source file of any size, small or big.
Unfortunately I don't have a native OS/2 on a multi-core system to test the SMP kernel on.
I would be grateful if anybody could test this behaviour on a multi-core system with SMP kernel on a native OS/2 installation and compare with behaviour in VirtualBox.
comment:30 Changed 4 years ago by lerdmann
As to probs with C2L.EXE: I have to correct my statement. It keeps trapping but the trap address is pretty much random. Even though I compile the very same file with the very same command line switches. See attached POPUPLOG.OS2. My gut feeling for this error is that it depends on how many segments the (segmented) executable contains. The more, the worse.
Changed 4 years ago by lerdmann
- attachment POPUPLOG.OS2
added
Traps in C2L.EXE (part of 16-bit Microsoft C Compiler)
comment:31 Changed 4 years ago by lerdmann
Some news:
at some point in time Scott Garfinkle from IBM modified the W4 kernel to also support using a PSD along with it so I took the chance:
1) if I use a W4 kernel with OS2APIC.PSD (and only 1 CPU of course), it looks like it gets rid of the traps and C2L.exe starts working again. I will need more observation time and report back
2) using a W4 kernel without any PSD leads to the random traps
Here is what I found in the eComStation bug tracker about problems running Firefox with an OS/2 guest in VirtualBox. It explains why the W4 kernel is kind of "flaky":
[The kernel trap is caused by a defect in the Warp4 kernels. The firefox code issues a fldcw which generates a math fault (#MF) exception, which does not push an exception-specific error code onto the stack. The kernel code should push a dummy error code onto the stack before entering the common exception handler code, but it does not. The common code assumes that the EFLAGS are at a specific stack offset and checks the EFLAGS VM bit to determine if the trap occurred in V86 mode. If the bit happens to be set, the result is a trap in V86FaultEntry + 17. If the bit is not set, the kernel will trap or hang somewhat later because the stack contains one less dword than the code expects.
The defect has been fixed in the SMP kernel, so running the SMP kernel in VirtualBox is a possible workaround.
It is not known why the fldcw generates a #MF exception. This might be a VirtualBox defect. ]
Changed 4 years ago by lerdmann
- attachment shutdownW4trap.PNG
added
Trap on shutdown with W4 kernel and OS2APIC.PSD
comment:32 Changed 4 years ago by lerdmann
I had a trap on shutdown. W4 kernel with OS2APIC.PSD. The trap screen says that CR0.NE bit was set. So this cannot be the only reason for trapping.
comment:33 Changed 4 years ago by michaln
Well, duh. For example, the reason could be that you're running the W4 kernel with a PSD, which I'm sure is an almost completely untested combination.
comment:34 Changed 4 years ago by lerdmann
ok,
1) POPUPLOG.OS2 was taken with the SMP kernel 10.104a and OS2APIC.PSD in place. I was using only one CPU.
2) I was invoking cl.exe 3 times with the very same parameters and the very same source file
3) the traps however occured at 3 different places in C2L.exe.
The only reason I was mentioning the W4 kernel is to state that C2L.exe does not trap when I use the W4 kernel in conjunction with OS2APIC.PSD.
comment:35 Changed 4 years ago by michaln
- Status changed from new to closed
- Resolution set to fixed
- Summary changed from OS/2 guest crashes on floating point exception to OS/2 guest crashes on floating point exception => fixed in svn
This ticket has clearly outlived its usefulness. We really don't care about 20+ year old Microsoft compilers which have known problems running on modern systems.
The reported problem is now fixed and the fix should be included in the next VirtualBox release. The OS/2 kernel should no longer crash on Intel CPUs because it should never get a #MF exception anymore (unless it asked for it).
comment:36 Changed 4 years ago by rudiIhle
Michal,
does the fix also address the inconsistent behavior when running the test case program multiple times and the reporting of the exception in the correct place ?
comment:37 Changed 4 years ago by michaln
No, it doesn't. That's a completely different problem, which was visible on AMD CPUs since day one. Feel free to open a separate ticket, just don't expect it to be fixed anytime soon without giving some really good reason why we should spend time on that (it actually needs quite a bit of work).
comment:38 Changed 4 years ago by lerdmann
Rudi,
would you create a new bug ? Unfortunately, Michal does not consider the OS/2 Microsoft C-Compiler 6.0 a valid test case. I am sure that once the "inconsistent behaviour" is fixed that then the OS/2 Microsoft C-Compiler 6.0 will happily exhibit consistent behaviour (trapping or not) provided the same command line switches and the same input source file is used.
In any case, thanks for fixing this bug's problem.
comment:39 Changed 4 years ago by rudiIhle
Lars,
I'm not convinced that the problems you are describing are really related to this issue. To summarize: We had a trap in the kernel due to VirtualBox was delivering #MF which is neither expected nor properly handled by the W4-Kernel. To my understanding this has been fixed. Now we see two different problems:
1.) the SIGFPE fires only once or twice per guest session
2.) the reported exception location is wrong
I cannot tell if these two issues have a common cause (maybe a bug in the DOS-like FPE emulation) or if these are two separate things. I also don't know, if the location reporting is broken in general (i.e. not only for SIGFPE). Maybe Michal can tell and depending on this I might open one or two new ticket(s). Given the time it took until this one was addressed, expecting it to be fixed "anytime soon" is probably not realistic anyway...
comment:40 Changed 4 years ago by michaln
The two issues probably have a common cause. They are also specific to floating-point exceptions because a) the delivery is very different, and b) the FPU has a whole own internal state that's different from the CPU.
If you have some paying customer who depends on accurate FPU exception reporting in OS/2 guests, that would greatly accelerate the process. But I suspect there's no such customer because very few applications even run with FP exceptions unmasked. I'm sure you understand that we have better things to do. Of course if someone wants to spent a fun few weeks with VirtualBox and submit a patch, we won't object :)
I highly doubt the problems with MS C 6.0 are related at all. MS C 6.0 is well known to have all sorts of problems running on modern systems, both Windows and OS/2. If you still depend on MS C 6.0 in 2012, you have no one but yourself to blame. So far I've seen no evidence that the MS C 6.0 compiler even uses the FPU at all (it might, but I wouldn't count on that).
comment:41 Changed 3 years ago by dbsoft
- Status changed from closed to reopened
- Resolution fixed deleted
In version 4.3.2 this issue has resurfaced and now affects both AMD and Intel processors.
comment:42 Changed 3 years ago by frank
Thanks for the report. We reproduced the bug and are working on a fix.
comment:43 Changed 2 years ago by dbsoft
Has any progress been made in the last 6 months?
comment:44 follow-up: ↓ 45 Changed 23 months ago by frank
There is a chance that this problem was fixed in VBox 4.3.16. Could you test?
comment:45 in reply to: ↑ 44 Changed 23 months ago by dbsoft
Just tested on my Mac with 4.3.16... still traps on the floating point exception.
comment:46 Changed 23 months ago by klaus
dbsoft, you never said what causes problems for you. A screenshot is not enough to debug whatever problem you might have.
comment:47 Changed 23 months ago by lerdmann
I still get a trap with the program snippet Rüdiger provided. See attached screenshot.
Changed 23 months ago by lerdmann
- attachment newTrapScreen.PNG
added
comment:48 Changed 23 months ago by frank
lerdmann, would you be willing to test a new fix? Which VirtualBox package do you need, Windows host?
comment:49 Changed 23 months ago by lerdmann
Sure, I'd like to test. I am using Windows 7 Professional as host. If it is not too much hassle I'd also like to have a matching extension pack.
comment:50 Changed 23 months ago by lerdmann
1) I forgot to mention: I am using an 8-core AMD CPU
2) Don't know if this has a bearing on the problem: see the 2. half of comment 31.
On the other hand I understood from Michals comments that with the existing fix the CPU should no longer get a #MF exception at all any more.
But 2. half of comment 31 would explain why the W4 kernel traps on a #MF exception while the SMP kernel does not and it would turn out to be a W4 kernel bug that cannot be fixed in VirtualBox.
comment:51 follow-up: ↓ 52 Changed 23 months ago by frank
Changed 23 months ago by dbsoft
- attachment eComStation 2.png
added
Trap with 4.3.17 test version
comment:52 in reply to: ↑ 51 Changed 23 months ago by dbsoft
I just tested on Windows 8.1 x64 on an AMD FX-6300 with the 4.3.17 build and it still traps on that code snippet.
Attached my trap screen... it is a TRAP 000e instead of 0008 that lerdmann got.
(And I just double-checked... using 4.3.16 on my Mac I also get TRAP 0008 like lerdmann)
comment:53 follow-up: ↓ 54 Changed 23 months ago by lerdmann
Yes, that fixes it for me.
The funny thing is, for the W4 kernel the program snippet reports 0x2003e to be the program trap address in the program whereas with the very same program the SMP kernel reports 0x2003b to be the program trap address. For both kernels, the program trap address is consistent across multiple invocations of the program.
Whatever, I consider this problem fixed at least for my AMD CPU.
Just for fun, I loaded a self written PSD in conjunction with the W4 kernel that enables the new way of floating point exception reporting (#MF exception) and disables IRQ13. Under this scenario I get the kernel trap as already shown in "newTrapScreen.PNG".
@dbsoft: you should make sure that you run the W4 kernel WITHOUT ANY PSD as it is supposed to be for the W4 kernel.
Thanks a lot !
comment:54 in reply to: ↑ 53 Changed 23 months ago by dbsoft
comment:55 follow-up: ↓ 56 Changed 22 months ago by michaln
@lerdmann: The program trap address is probably the address of the instruction where the FP exception was detected, but not the address of the actual FP instruction which triggered the exception. I don't know why the reported address is different, but it should not cause problems.
The thing with the PSD is very interesting and yes, it basically exactly simulates the VirtualBox bug (FP exceptions are delivered as #MF and not IRQ13) which then triggers a bug/unexpected code path in the W4 kernel.
It would be nice if someone could test on an Intel machine, too.
comment:56 in reply to: ↑ 55 Changed 22 months ago by dbsoft
It would be nice if someone could test on an Intel machine, too.
I can test on Intel... can boot my Mac in Windows or test a Mac build if one is available... but I am experiencing a trap still with the test program... a different one though. So not sure how valuable my test will be.
So I booted Windows, installed the same VirtualBox 4.3.17 and using the same image I no longer get the trap. Seems to be fixed on my Intel Mac in Windows 7... not sure if my AMD PC has something configured differently but I am still getting the trap there with the same software.
I also now tested on an older AMD Athlon 64 X2 running Windows 7 and I also get the trap 000e... seems to not be fixed on AMD for me. I also tested on an older Core 2 Duo Mac running Windows 7 which also seems to be fixed.
So my testing shows Intel is fixed, AMD is still bugged but with trap 000e now instead of 0008.
comment:57 Changed 22 months ago by dbsoft
Can anyone else test with various AMD systems to see if my results are accurate or if something is going on weird with my systems?
Changed 22 months ago by lerdmann
- attachment os2pcat.zip
added
PSD for Warp4 kernel that works around bugs in the #MF handler of Warp4 kernel
comment:58 follow-up: ↓ 59 Changed 22 months ago by lerdmann
@dbsoft: find attached file "os2pcat.zip". It contains a PSD (and all the source code) that fixes the existing problem in the Warp 4 kernel. Unzip OS2PCAT.PSD and OS2PCAT.SYM and place them into \os2\boot directory. Add this line to config.sys: PSD=OS2PCAT.PSD
That should fix your problem. It's no use fixing something in VirtualBox where in fact the Warp4 kernel is causing all the problems.
Add. info: with this PSD loaded in conjunction with the W4 kernel, the failing address (of the program snippet) displayed is exactly the same as for the SMP kernel.
comment:59 in reply to: ↑ 58 Changed 22 months ago by dbsoft
That should fix your problem. It's no use fixing something in VirtualBox where in fact the Warp4 kernel is causing all the problems.
That does seem to fix the trap on my AMD systems... however it isn't clear to me why the behavior is different on Intel and AMD?
comment:60 Changed 22 months ago by lerdmann
I am not a virtualization expert, but there must be some difference between AMD and Intel. The point is that the PSD works around the bug in the W4 kernel and correctly handles what VirtualBox does on an unmasked floating point exception.
comment:61 Changed 22 months ago by lerdmann
About virtualization, here is a relevant excerpt from the Intel documentation (volume 3, chapter 23.8) and I suppose that AMD followed closely:
The first processors to support VMX operation require that the following bits be 1 in VMX operation: CR0.PE, CR0.NE, CR0.PG, and CR4.VMXE.
The necessity to set the CR0.NE bit translates to the generation of an #MF exception for floating point exceptions instead of taking the route via an external interrupt controller issuing a IRQ13 interrupt.
I would believe that your AMD CPU is an earlier model that requires CR0.NE to be set in order to properly operate in a virtualized environment. As a consequence the Warp4 kernel has to properly deal with the #MF exception which is what OS2PCAT.PSD ensures.
Later CPUs might offer additional capabilities where setting CR0.NE bit is not necessary, I don't know.
comment:62 follow-up: ↓ 63 Changed 22 months ago by michaln
The requirement to run with CR0.NE set (when using virtualization) applies to all Intel processors. The legacy FPU exception handling does not scale beyond a single CPU, which is why even OS/2 SMP kernels can't use it.
Intel probably plans to completely remove the old-style FPU exception handling in the future since it's not usable for any modern OS (where "modern" includes anything better than DOS, Windows 9x, and OS/2 W4-style kernels).
Anyway, if the PSD is necessary, it's a bug in VirtualBox (which we can't reproduce). Then again, the PSD isn't a bad solution and might actually make things slightly faster because FPU exceptions don't need to be intercepted.
comment:63 in reply to: ↑ 62 Changed 22 months ago by dbsoft
Anyway, if the PSD is necessary, it's a bug in VirtualBox (which we can't reproduce). Then again, the PSD isn't a bad solution and might actually make things slightly faster because FPU exceptions don't need to be intercepted.
That is kind of what I was thinking too since it works fine on Intel... the one processor I tried on is quite old but the newer one I purchased just a few months ago... it is Piledriver based which I think is the current series originally released at the end of 2012. So I don't think it is a problem with it being an old CPU.
Changed 22 months ago by dbsoft
- attachment amdfx6300.png
added
comment:64 Changed 22 months ago by lerdmann
What was I talking ...
I had successfully tested 4.3.17 (with the W4 kernel and without any PSD) with an Intel dual-Core CPU and NOT with an AMD CPU. Combining with dbsoft's comments it looks like the problem is fixed for Intel CPUs but apparently not for AMD CPUs.
Sorry for the confusion.
comment:65 follow-up: ↓ 66 Changed 21 months ago by frank
comment:66 in reply to: ↑ 65 Changed 21 months ago by dbsoft
Here is another Windows test build which contains a fix for AMD hosts. And here is the extpack.
Initial testing seems to show it works, I only tested on the new processor and just commented out the PSD line in the CONFIG.SYS to remove the OS2PCAT mentioned above. I'll test some more to verify the PSD is actually not loading and that it works on the other processor later today. Thanks!
Tested with an image that I did not install the PSD in and also on the older AMD system and both work correctly! Thanks looks like it is fixed for AMD now too.
comment:67 Changed 21 months ago by michaln
Getting more confirmation would be excellent. FYI, the latest fix applies to all AMD hosts (everything using AMD-V to be exact). No impact on Intels.
comment:68 Changed 21 months ago by lerdmann
As could be expected, I can confirm that it still works on Intel.
comment:69 Changed 20 months ago by frank
Could you recheck with VBox 4.3.20?
comment:70 Changed 20 months ago by dbsoft
Tested on my main Mac (Intel) and PC (AMD) and both worked correctly.
Thank you!
comment:71 Changed 20 months ago by frank
- Status changed from reopened to closed
- Resolution set to fixed
Thanks for the feedback! I will close this ticket.
A VBox.log file is missing. It will show us which configuration your VM has and which processor features of your host are used.
Some people like to record any event that occurs, like birthdays or parties. They capture the event with a recording device and save the files all over the place, where they can easily get lost. This little application helps to organize the events and their videos.
Windows Presentation Foundation (WPF) is a new technology from Microsoft that allows the developer to manipulate the user interface more easily. The developer can act as a designer to build a user interface in a more interactive manner.
As of this writing, WPF does not have a property or method on MediaElement that handles stream data. MediaElement uses its Source property to grab the media or streaming media; it cannot read from a byte[] or a Stream. That is unfortunate, since video can come from many different sources.
There are three files that need to be downloaded:
The source code of Personal Diary WPF is the main application. It can be opened using Visual C# 2008 Express Edition.
The source code of "ASP.NET as video stream" is the ASP.NET site that produces the media stream. For now it only serves Windows Video (*.wmv) and MPEG video (*.mpg). It can be opened using Visual Web Developer 2008 Express Edition.
An SQL file can be opened using SQL Server Management Studio Express. My database name is persdiary. You can change it and don't forget to change the connection string in the app.config. Run the SQL so that the tables are ready to use.
Open both projects - the WPF project and the Web site. Since not every Windows installation comes with IIS, I use the built-in web server that ships with the product. Run the website first to make sure the web server is up and running.
After the website runs and the browser shows the result, copy the URL from the browser's address bar into the value of the hostpath key in appSettings.
Since WPF's MediaElement control cannot receive a byte[] or a Stream through its Source property, we work around it by using ASP.NET as a video stream. First, ASP.NET loads the data from the database; the video itself is saved in a varbinary column in the mstvideo table.
if (Request.QueryString.Count > 0)
{
    string videoid = Request.QueryString["vid"];
    if (!string.IsNullOrEmpty(videoid))
    {
        bool loadFull = false;
        string loadFullStr = Request.QueryString["loadfull"];
        if (!string.IsNullOrEmpty(loadFullStr))
            loadFull = Convert.ToBoolean(loadFullStr);
        else
        {
            Response.Write("<h1>Full Load or not?</h1>");
        }
        byte[] result = ReadVideo(videoid, loadFull);
        WriteVideoToPage(result);
    }
    else
    {
        Response.Write("<h1>Need Video ID</h1>");
    }
}
else
{
    Response.Write("<h1>Need Video ID</h1>");
}
This code detects the query string and calls the methods that query data from the database and write the result to the page. There are only two query-string parameters: vid and loadfull. vid is the video id used to load the video data from the table. loadfull is a flag that controls whether to load the video fully; when it is false, I load only 1/8th of the total bytes stored in the table.
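The same query-string rules can be sketched in plain Python using only the standard library. The parameter names vid and loadfull come from the article; the handler shape itself is a hypothetical simplification, not the actual ASP.NET code.

```python
# Sketch of the vid/loadfull query-string handling described above.
from urllib.parse import parse_qs

def classify_request(query_string):
    params = parse_qs(query_string)
    vid = params.get("vid", [""])[0]
    if not vid:
        return "Need Video ID"          # mirrors the error page
    loadfull_raw = params.get("loadfull", [""])[0]
    load_full = loadfull_raw.lower() == "true"
    # When loadfull is absent or false, only a partial load happens.
    return ("full" if load_full else "partial", vid)

print(classify_request("vid=42&loadfull=true"))   # ('full', '42')
print(classify_request("vid=42"))                 # ('partial', '42')
print(classify_request(""))                       # Need Video ID
```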
using (SqlConnection connection = new SqlConnection(ConnectionString))
{
    connection.Open();
    SqlCommand command = connection.CreateCommand();
    command.CommandType = CommandType.Text;
    StringBuilder sb = new StringBuilder();
    sb.Append("SELECT ");
    sb.Append("DATALENGTH(vdcontent) AS vdcontent_length, ");
    sb.Append(" vdcontent, vdformat ");
    sb.Append(" FROM mstvideo ");
    sb.Append(" WHERE vdid=@vdid ");
    command.CommandText = sb.ToString();
    command.Parameters.Add("@vdid", SqlDbType.Char).Value = videoid;
    using (SqlDataReader reader = command.ExecuteReader())
    {
        int startIdx = 0;
        long retval = 0;
        if (!reader.HasRows)
        {
            Response.Write("<h1> Don't have rows ! </h1>");
        }
        while (reader.Read())
        {
            if (string.Compare(reader.GetString(reader.GetOrdinal("vdformat")), ".wmv", true) == 0)
                Response.ContentType = "video/x-ms-wmv";
            else if (string.Compare(reader.GetString(reader.GetOrdinal("vdformat")), ".mpg", true) == 0)
                Response.ContentType = "video/mpeg";
            int buffersize = reader.GetInt32(reader.GetOrdinal("vdcontent_length"));
            if (!loadFull)
                buffersize /= 8;
            movieContainer = new byte[buffersize];
            retval = reader.GetBytes(reader.GetOrdinal("vdcontent"), startIdx, movieContainer, 0, buffersize);
        }
    }
}
The code above queries the video from the table. The DATALENGTH(vdcontent) expression in the select statement measures the length of the data stored in vdcontent. I use command.ExecuteReader() to execute the query. The result is the length of the video, the video itself, and the video format. The format is used to choose the MIME type of the ASP.NET response; for now, only WMV and MPG are supported. Again, if loadFull is false, I set the buffer size smaller; in this case I simply divide the size by 8.
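The two decisions the handler makes, picking a MIME type from the stored extension and shrinking the buffer when a full load was not requested, are easy to isolate. A Python sketch (the extension-to-MIME pairs come from the article; the function names are hypothetical):

```python
# The case-insensitive comparison mirrors string.Compare(..., true) in the C# code.
MIME_BY_EXTENSION = {
    ".wmv": "video/x-ms-wmv",
    ".mpg": "video/mpeg",
}

def pick_content_type(extension):
    return MIME_BY_EXTENSION.get(extension.lower())

def buffer_size(total_length, load_full):
    # When load_full is false, only 1/8th of the stored bytes are read.
    return total_length if load_full else total_length // 8

print(pick_content_type(".WMV"))   # video/x-ms-wmv
print(buffer_size(8000, False))    # 1000
print(buffer_size(8000, True))     # 8000
```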
Response.BufferOutput = true;
//Response.BinaryWrite(movieContents);
BinaryWriter binWriter = new BinaryWriter(Response.OutputStream);
binWriter.Write(videoData);
binWriter.Flush();
ASP.NET renders the video by writing the contents with System.IO.BinaryWriter. The ASPX file itself is empty.
RegistryKey regKey = Registry.CurrentUser;
regKey = regKey.OpenSubKey("Software");
regKey = regKey.OpenSubKey("Nicko");
if (regKey != null)
{
    result = regKey.GetValue("ConnectionString").ToString();
}
I use the registry key to get the connection string. So you can change the connection string once in the WPF app.config.
This is the main application that is written in WPF. The application can add a diary event and the video. You can click the video in the list below to enlarge the video to the center. Don't forget to add a reference to XCeed datagrid for WPF to make this work.
First I add two new namespaces in MainWindow.xaml - The XCeed datagrid and local namespace:
xmlns:local="clr-namespace:Diary.WPF" xmlns:xceed="clr-namespace:Xceed.Wpf.DataGrid;assembly=Xceed.Wpf.DataGrid"
I add the Xceed datagrid to the grid layout:
<xceed:DataGridControl ...>
    <xceed:DataGridControl.Columns>
        <xceed:Column ... />
        <xceed:Column ... />
        <xceed:Column ... />
    </xceed:DataGridControl.Columns>
</xceed:DataGridControl>
The grid changes the selected item when the mouse is clicked or the up/down keys are pressed, which triggers a refresh of the video content.
private void Datagrid_MouseUp(object sender, MouseButtonEventArgs e)
{
    RefreshContents();
}
....
private void DataGrid_KeyUp(object sender, KeyEventArgs e)
{
    if (e.Key == Key.Up || e.Key == Key.Down)
    {
        RefreshContents();
    }
}
I add the buttons that have a function to add event and add video.
...
<Button Height="23" x:Name="..." ...>Add</Button>
...
<Button x:Name="..." ...>Add new</Button>
The button will trigger an event to open a dialog window.
private void btnAddEvent_Click(object sender, RoutedEventArgs e)
{
    ...
    bool? result = dlgInsert.ShowDialog();
    ...
}
The method will open the new event dialog window:
private void btnAddNewVideo_Click(object sender, RoutedEventArgs e)
{
    ...
    bool? result = dlgVideo.ShowDialog();
    ...
}
The method will open the new video dialog window:
When I click one of the videos below, it enlarges in the middle and plays in full. When I click that large video, it goes back to the list below:
private void me_MouseUp(object sender, MouseButtonEventArgs e)
{
    MediaElement me = sender as MediaElement;
    if (stkpanVideo.Children.IndexOf(me) >= 0)
    {
        //Quick and Dirty
        ...
        me.Height *= 4;
        me.Width *= 4;
        ...
    }
    else
    {
        //Quick and Dirty
        ...
        me.Height /= 4;
        me.Width /= 4;
        ...
    }
}
I create a XAML window dialog, InsertEventDialog.xaml, to add events to the table. I use a grid layout to build a table-like layout, with textblocks and textboxes as input controls, and a Windows Forms DateTimePicker to choose the event date. I also create InsertNewVideo.xaml to add a video for the selected diary event.
...
<TextBlock ... />
<TextBox ... />
...
<WindowsFormsHost Name="wpfHost">
    <windowsform:DateTimePicker ... />
</WindowsFormsHost>
I learned how to bind data to WPF controls using ObservableCollection<T>, not only DataSet or DataTable. I just write the property name into the control's binding source property and the binding happens automatically. ASP.NET can be used as a streaming video source, so I can use it as a dynamic source for MediaElement.
Charming Python
TK programming in Python
Tips for beginners using Python's GUI library
I'd like to introduce you to the easiest way imaginable to start GUI programming, namely by using Scriptics' TK and Tkinter wrapper. We'll be drawing a lot of parallels with the curses library, which I covered on developerWorks in "Curses programming in Python". Both libraries have a surprisingly similar interface despite the fact that curses targets text consoles and TK implements GUIs. Before using either library, you need a basic understanding of windows and event loops and a reference to the available widgets. (Well, a good reference and a moderate amount of practice.)
Like the article on curses, this article is limited to the features of Tkinter itself. Since Tkinter comes with many Python distributions, you probably won't have to download support libraries or other Python modules. The Related topics later in this article point to several collections of higher-level user interface widgets, but you can do a lot with Tkinter itself, including construction of your own high-level widgets. Learning the base Tkinter module will introduce you to the TK way of thinking, which is important even if you go on to use more advanced widget collections.
A brief description of TK
TK is a widely used graphics library most closely associated with the TCL language, both developed by John Ousterhout. Although TK started out in 1991 as an X11 library, it has since been ported to virtually every popular GUI. (It's as close as Python comes to having a "standard" GUI.) There are TK bindings (the Tkinter modules) for most popular languages now, as well as many of the smaller languages.
Before we begin, I must make a confession: I am no wizened expert at TK programming. In fact, the bulk of my TK programming experience began about three days before I started this article. Those three days were not without their challenges, but by the end I felt like I had a pretty good grasp of Tkinter . The moral here is that both TK and the Tkinter wrapper are extraordinarily well designed, user-friendly, and just about the easiest introduction to GUI programming out there.
Starting with a test application
As a test application we'll use a wrapper for Txt2Html, a file format conversion program used in many of my previous columns (see Related topics). Although you can run Txt2Html in several ways, the wrapper here is based on running Txt2Html from the command line. The application runs as a batch process, with command-line arguments indicating various aspects of the conversion to be performed. (Later it might be nice to offer users the option of an interactive selection screen that leads them through conversion options and provides visual feedback of selected options before performing the actual conversion.)
tk_txt2html is based on a topbar menu with drop-downs and nested submenus. Implementation details aside, it looks a lot like the curses version discussed in "Curses programming in Python". tk_txt2html and curses_txt2html are clearly in the same ballpark, even though TK accomplishes more with less code. In TK, for example, features like menus can rely on built-in Tkinter classes instead of needing to be written from scratch.
Along with setting configuration options, the TK wrapper also includes a scrolling help box built with the TK Text widget (an about box with the Message widget) and a history window that exercises TK's dynamic geometry management. And like most interactive applications, the wrapper accepts some user input with TK's Entry widget.
Let's look at the application in action now before discussing the code any further.
Learning the basics
There are really only three things that a Tkinter program has to do:
import Tkinter      # import the Tkinter module
root = Tkinter.Tk() # create a root window
root.mainloop()     # create an event loop
This is a perfectly legitimate Tkinter program (never mind that it's useless because it doesn't even manage "hello world"). The only thing this program needs to do is create some widgets to populate its root window. Thus enhanced, our program's root.mainloop() method call will handle all user interaction without further programmer intervention.
The main() function
Now let's look at the more realistic main() function of tk_txt2html.py. Notice that I prefer John Grayson's style of a plain import Tkinter statement rather than from Tkinter import (see his book listed in Related topics). This is not so much because I'm worried about namespace pollution (the usual caveat for from ... import statements), but rather because I want to be explicit about using Tkinter classes; I don't want to risk confusing them with my own functions and classes. I recommend you do the same thing, at least at the beginning.
def main():
    global root, history_frame, info_line
    root = Tkinter.Tk()
    root.title('Txt2Html TK Shell')
    init_vars()
    #-- Create the menu frame, and menus to the menu frame
    menu_frame = Tkinter.Frame(root)
    menu_frame.pack(fill=Tkinter.X, side=Tkinter.TOP)
    menu_frame.tk_menuBar(file_menu(), action_menu(), help_menu())
    #-- Create the history frame (to be filled in during runtime)
    history_frame = Tkinter.Frame(root)
    history_frame.pack(fill=Tkinter.X, side=Tkinter.BOTTOM, pady=2)
    #-- Create the info frame and fill with initial contents
    info_frame = Tkinter.Frame(root)
    info_frame.pack(fill=Tkinter.X, side=Tkinter.BOTTOM)
    # first put the column labels in a sub-frame
    LEFT, Label = Tkinter.LEFT, Tkinter.Label   # shortcut names
    label_line = Tkinter.Frame(info_frame, relief=Tkinter.RAISED, borderwidth=1)
    label_line.pack(side=Tkinter.TOP, padx=2, pady=1)
    Label(label_line, text="Run #", width=5).pack(side=LEFT)
    Label(label_line, text="Source:", width=20).pack(side=LEFT)
    Label(label_line, text="Target:", width=20).pack(side=LEFT)
    Label(label_line, text="Type:", width=20).pack(side=LEFT)
    Label(label_line, text="Proxy Mode:", width=20).pack(side=LEFT)
    # then put the "next run" information in a sub-frame
    info_line = Tkinter.Frame(info_frame)
    info_line.pack(side=Tkinter.TOP, padx=2, pady=1)
    update_specs()
    #-- Finally, let's actually do all that stuff created above
    root.mainloop()
There are a number of things to note in our simple main() function:
- Every widget has a parent. Whenever we create a widget, the first argument to the instance creation is the parent of the new widget.
- If there are any other widget creation arguments, they are passed by name. This Python feature gives us lots of flexibility in specifying options or allowing them to default.
- A number of widget instances (Frame) are global variables. We could make these local by passing them from function to function in order to maintain a theoretical purity of scope, but it would be much more trouble than it's worth. Besides, making these basic UI elements global underlines the fact that they are useful in all of our functions. But be sure to use a good naming convention for your own global variables. (As a forewarning, Pythonists seem to hate Hungarian notation.)
- After we create a widget, we call a geometry manager method to let TK know where to put it. A lot of magic goes into TK's calculation of the details, especially when windows are resized or when widgets are added dynamically. But in any case we need to let TK know which set of incantations to use.
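The "passed by name" point above is ordinary Python keyword-argument machinery, which is exactly what gives Tkinter its flexibility in specifying options or allowing them to default. A hypothetical (non-Tkinter) constructor shows the same pattern:

```python
# Sketch of the keyword-argument style Tkinter widget constructors use.
# make_widget and its option names are made up for illustration.
def make_widget(parent, **options):
    settings = {"side": "top", "fill": None}   # defaults
    settings.update(options)                   # overridden only by name
    return (parent, settings)

parent, settings = make_widget("root", fill="x")
print(settings["side"])   # top  (defaulted)
print(settings["fill"])   # x    (overridden by name)
```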
Applying geometry managers
TK provides three geometry managers: .pack(), .grid(), and .place(). Only the first two are used by tk_txt2html, although .place() can be used for fine-grained (in other words, very complicated) control. Most of the time you'll use .pack().
You're certainly allowed to call the .pack() method without arguments. But if you do that, you can only count on the widget winding up somewhere on your display, so you'll probably want to give .pack() some hints. The most important of these is the side argument. Possible values are LEFT, RIGHT, TOP, and BOTTOM (note that these are variables in the Tkinter namespace).
A lot of the magic of .pack() comes from the fact that widgets can be nested. In particular, the Frame widget does little more than act as a container for other widgets (on occasion it shows borders of various types). So it's particularly handy to pack several frames in the desired orientations and then add other widgets within each frame. Frames (and other widgets) are packed in the order their .pack() methods are called. So if two widgets both ask for side=TOP, it's first come, first served.
tk_txt2html also plays a bit with .grid(). The grid geometry manager overlays a parent widget with invisible graph-paper lines. When a widget calls .grid(row=3, column=4), it's requesting of its parent that it be placed on the third row and the fourth column. The parent's total rows and columns are computed by looking at the requests made by all its children.
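That last computation, deriving the parent's grid size from its children's requests, can be sketched in a few lines of plain Python. This is an illustration of the idea only, not Tkinter's actual internals:

```python
# Each child request is a (row, column) pair; the parent's extent is
# simply one past the largest row and column anyone asked for.
def grid_extent(requests):
    rows = max(r for r, c in requests) + 1
    cols = max(c for r, c in requests) + 1
    return rows, cols

print(grid_extent([(0, 0), (3, 4), (1, 2)]))   # (4, 5)
```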
Don't forget to apply a geometry manager to your own widgets, lest you have the rude awakening of not seeing them on your display.
Menus
Tkinter makes menus painless. Although we're working with a much simpler example here, you can, if you want, populate your menus with different fonts, pictures, checkboxes, and all sorts of fancy child widgets. In our case, the menus for tk_txt2html are all created with the line we saw above.
menu_frame.tk_menuBar(file_menu(), action_menu(), help_menu())
By itself this line might mystify as much as it clarifies. Most of the work that must be done lives in the functions called *_menu(). Let's look at the simplest one.
def help_menu():
    help_btn = Tkinter.Menubutton(menu_frame, text='Help', underline=0)
    help_btn.pack(side=Tkinter.LEFT, padx="2m")
    help_btn.menu = Tkinter.Menu(help_btn)
    help_btn.menu.add_command(label="How To", underline=0, command=HowTo)
    help_btn.menu.add_command(label="About", underline=0, command=About)
    help_btn['menu'] = help_btn.menu
    return help_btn
A drop-down menu is a Menubutton widget with a Menu widget as a child. The Menubutton is .pack()'d to the appropriate location (or .grid()'d, etc.), and the Menu widget has items added with the .add_command() method. (Note the odd assignment to the Menubutton's dictionary above. Don't question this; just blindly follow me here and do the same thing in your own code.)
Getting user input
The example we're going to look at now shows how the Label widget displays output (see Related topics for the full source, including some examples of the Text and Message widgets). The basic widget for field input is Entry. It's simple to use, but the technique might be a bit different from what you might expect if you've used Python's raw_input() or curses' .getstr() before. TK's Entry widget does not return a value that can be assigned; it instead populates the field object it takes as an argument. The following function, for example, allows the user to specify an input file.
def GetSource():
    get_window = Tkinter.Toplevel(root)
    get_window.title('Source File?')
    Tkinter.Entry(get_window, width=30, textvariable=source).pack()
    Tkinter.Button(get_window, text="Change",
                   command=lambda: update_specs()).pack()
There are a few things to notice at this point. We've created a new Toplevel widget as a dialog box for this input, and we've specified the input field by creating an Entry widget with a textvariable argument. But wait, there's more! The textvariable used here, source, was set up earlier, in the init_vars() function called from main():
source = Tkinter.StringVar()
source.set('txt2html.txt')
This creates an object suitable for taking user input and gives it an initial value. The object is modified immediately every time a change is made within an Entry widget that links to it. The change occurs for every keystroke within the Entry widget, not just upon termination of a read in the style of raw_input().
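That live-update behavior can be imitated in plain Python with a tiny observable value that notifies on every .set(), much as the Entry widget pushes each keystroke into its textvariable. This is a sketch of the idea only, not Tkinter's actual StringVar implementation:

```python
# A minimal observable variable: watchers fire on every set().
class ObservableVar:
    def __init__(self, value=""):
        self._value = value
        self._watchers = []

    def trace(self, callback):
        self._watchers.append(callback)

    def set(self, value):
        self._value = value
        for callback in self._watchers:
            callback(value)   # fired immediately on each change

    def get(self):
        return self._value

seen = []
source = ObservableVar()
source.trace(seen.append)
for ch in "txt":               # simulate three keystrokes
    source.set(source.get() + ch)
print(seen)                    # ['t', 'tx', 'txt']
```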
To read the value back out of the StringVar, use its .get() method:

source_string = source.get()
Wrapup
The techniques we outlined here, along with the ones we used in the full application source code, should get you started with Tkinter programming. After you play with it a bit you'll find that it's not hard to work with. One nice thing is that the TK library may be accessed by many languages other than Python, so what you learn using Python's Tkinter module is mostly transferable to other languages.
Related topics
- Fredrik Lundh has written a good tutorial for Tkinter that contains much more detail than covered here.
- A few printed books are worth checking out. The first is a good intro to TK itself. The second is specific to Python and has many examples that use the PMW collection:
- Tcl/Tk in a Nutshell, by Paul Raines and Jeff Tranter (O'Reilly, 1999)
- Python and Tkinter Programming by John E. Grayson (Manning, 2000)
- ActiveState has recently created a very nice Python distribution that includes Tkinter and a variety of other handy packages and modules not contained in most other distributions. (They also have an ActivePerl distribution for those inclined towards that other scripting language.)
- Scriptics (home of the maintainers and creators of TK) has been renamed.
- Read these related articles by David Mertz on developerWorks:
- Take a look at the files and articles mentioned in this article. | https://www.ibm.com/developerworks/library/l-tkprg.html | CC-MAIN-2019-30 | refinedweb | 2,207 | 56.25 |
Apache::App::Mercury::UserManager - Sample UserManager class
This is a sample class which illustrates how Apache::App::Mercury uses a user manager class to interact with your application's users. You should implement your own UserManager class with the methods described below to fit your application. Set the name of your UserManager class in the Apache::App::Mercury::Config::USER_MANAGER_CLASS variable.
Get profile information on current user (logged in to your application). Expects the calling object to know what user is logged in, and the userprofile() method to have access to that information.
Currently, userprofile() must minimally support the following values for $param (return the appropriate user information when called with that $param):
user Apache::App::Mercury user name user_desc long user description (e.g. "Fname Lname") e_mail user's e-mail address fname user's first name lname user's last name
Your userprofile() method can support more; which you can then make use of in a custom Display class, for example. You can also opt to make your userprofile() method read-write, and then make use of it elsewhere in your application. The only requirements of Apache::App::Mercury is it should return valid values for the above params for the currently logged-in user.
Get user profile information on users that exist in the application (but not necessarily logged in at the moment). Input is a list of valid user names in your application. Output should be an array of hashrefs, one for each of @users, (minimally) of the following structure:
{ user => 'userid', fname => 'First name of user', mname => 'Middle name or initial of user', #optional lname => 'Last name of user', e_mail => 'email@forward.to.addr' }
Get a list of $user's custom-defined mailboxes; if called in set context, set the given user's custom-defined mailboxes to those specified in @update_boxes and return 1 for success or undef on failure.
Get name of mailbox to send transaction-related msgs to for current user. In set context (if $trans_box is given), sets mailbox to filter transaction-related msgs to. Returns 1 for success, undef on failure.
Expects the calling object to know what user is logged in, and the mail_trans_filter() method to have access to that information.
Get auto-forward setting for current user, given a security level. Security level may be one of "low", "medium", or "high". Return value is one of "message", "notify", or "none".
"message" => "send the entire message", "notify" => "send a notification", "none" => "do not send anything"
Expects the calling object to know what user is logged in, and the auto_forward() method to have access to that information.
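For illustration only (sketched in Python rather than Perl), the auto_forward contract above reduces to two small value sets and a lookup; the function name here is hypothetical:

```python
# Valid inputs/outputs for the auto_forward contract described above.
ACTION_MEANING = {
    "message": "send the entire message",
    "notify": "send a notification",
    "none": "do not send anything",
}

def auto_forward_meaning(level, action):
    if level not in ("low", "medium", "high"):
        raise ValueError("bad security level: " + level)
    if action not in ACTION_MEANING:
        raise ValueError("bad action: " + action)
    return ACTION_MEANING[action]

print(auto_forward_meaning("high", "notify"))   # send a notification
```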
Adi Fairbank <adi@adiraj.org>
This software (Apache::App::Mercury and all related Perl modules under the Apache::App::Mercury namespace) is copyright Adi Fairbank.
July 19, 2003 | http://search.cpan.org/dist/Apache-App-Mercury/Mercury/UserManager.pm | crawl-003 | refinedweb | 468 | 54.02 |
Varnish
Varnish is free, open source, and flexible software used to accelerate websites by caching web page content in memory. Varnish caches content in hash tables, a key-value store where the URL is usually taken as the key.
Scenario
Set up varnish to serve only specific pages of your website from cache. The web pages should be served from cache by varnish only when end users are not logged in to the website. If end users are logged in and are browsing these pages, the pages should be served by the web server running behind varnish.
We will start by installing varnish 4.0 on an Ubuntu server on which nginx is already running and will act as the backend, and we will configure varnish as a reverse proxy.
Varnish Installation: We will execute the following commands on terminal to install varnish.
sudo apt-get install apt-transport-https
sudo curl | apt-key add -
sudo echo "deb precise varnish-4.0" >> /etc/apt/sources.list.d/varnish-cache.list
sudo apt-get update
sudo apt-get install varnish -y
Varnish Configuration: We will follow the following steps to configure varnish as reverse proxy.
1. Stop varnish and web server:
service varnish stop
service nginx stop
2. Change listening port of web server from 80 to 8080 in /etc/nginx/sites-enabled/default file.
3. Open /etc/default/varnish and change it’s listening port 6081 to 80 as shown below:
Change
DAEMON_OPTS="-a :6081 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"
Into
DAEMON_OPTS="-a :80 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"
This would tell varnish to listen on 80 port.
4. Start varnish and web server:
service varnish start
service nginx start
We have configured varnish as a reverse proxy in front of the nginx server. We can verify that varnish is listening on port 80 and nginx on port 8080 using the command below.
netstat -ntlp
Now we will start configuring /etc/varnish/default.vcl, where we define the custom rules that apply to incoming client requests.
Varnish uses a language called Varnish Configuration Language (VCL) to define custom rules. The syntax of VCL is similar to C or Perl.
Overview of default.vcl: Before defining our custom rules, let’s first understand the structure of the default.vcl file.
This file consists of subroutines, and each subroutine is called sequentially in a pre-defined order set by varnish. Subroutines can be built-in or custom; built-in subroutines start with "vcl_", and custom subroutines must not start with "vcl_". Every subroutine ends with a return statement whose argument (recv, fetch, pass, miss, hit, deliver, pipe, or hash) defines the next action. Each argument has a different meaning, and not every argument is available in every subroutine.
In the file, we first need to tell the VCL compiler which version of varnish we are using. Then we import the varnish modules used in the file, and finally we define the backend server.
cat /etc/varnish/default.vcl

vcl 4.0;

import std;
import directors;

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}
There are a total of 14 built-in subroutines. We will go through only the six subroutines required for our scenario.
1. vcl_recv: This subroutine is triggered at the beginning of a request. Here we decide whether and how to let varnish handle the request, or whether to pass it to the backend server. Every statement added to the subroutine is explained in the comment just above it.
sub vcl_recv {
    # Define the backend server first, using the "req" object,
    # which is created every time varnish receives a request.
    set req.backend_hint = default;

    # Bypass all authentication requests to the backend server.
    # The "pass" argument in the return statement hands the request
    # to the vcl_pass subroutine, which ultimately passes it to the
    # backend server.
    if (req.http.Authorization || req.http.Authenticate) {
        return (pass);
    }

    # The http.X-Requested-With header identifies ajax requests,
    # which we pass to the backend server.
    if (req.http.X-Requested-With == "XMLHttpRequest" || req.url ~ "nocache") {
        return (pass);
    }

    # Pass requests to the backend if req.url contains any of
    # the below strings anywhere in the URL.
    if (req.url ~ "/(checkout|customer|catalog/product_compare|wishlist)/") {
        return (pass);
    }

    # Pass requests to the backend if the request method is not
    # GET or HEAD.
    if (req.method != "GET" && req.method != "HEAD") {
        return (pass);
    }

    # req.http.Cookie contains the cookies.
    # We pass the request to the backend server when someone is logged
    # into the website; we don't want varnish to serve anything from
    # cache for a logged-in user.
    if (req.http.Cookie ~ "CUSTOMER_AUTH") {
        return (pass);
    }

    # The hash argument calls the vcl_hash subroutine to serve the
    # mentioned URLs from cache.
    if (req.url ~ "^/$" || req.url ~ "/footwear/*" || req.url ~ "^/accessories/*") {
        return (hash);
    }
}
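The decision ladder in vcl_recv is easy to mirror in Python, which makes the rules testable outside varnish. The URL patterns and cookie name come from the VCL above; the request dict shape and slightly simplified regexes are my own:

```python
# Pure-Python mirror of the vcl_recv pass/hash decisions above.
import re

PASS_URL = re.compile(r"/(checkout|customer|catalog/product_compare|wishlist)/")
CACHE_URL = re.compile(r"^/$|/footwear/|^/accessories/")

def classify(req):
    if req.get("authorization"):
        return "pass"
    if req.get("x-requested-with") == "XMLHttpRequest" or "nocache" in req["url"]:
        return "pass"
    if PASS_URL.search(req["url"]):
        return "pass"
    if req.get("method", "GET") not in ("GET", "HEAD"):
        return "pass"
    if "CUSTOMER_AUTH" in req.get("cookie", ""):
        return "pass"   # logged-in users always hit the backend
    if CACHE_URL.search(req["url"]):
        return "hash"   # serve these pages from cache
    return "default"

print(classify({"url": "/checkout/cart"}))    # pass
print(classify({"url": "/footwear/boots"}))   # hash
print(classify({"url": "/footwear/boots", "cookie": "CUSTOMER_AUTH=1"}))  # pass
```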
2. vcl_hash: This is called by vcl_recv. It returns lookup, which searches for the object in the cache and eventually calls the vcl_hit or vcl_miss subroutine: if the object is present in the cache, vcl_hit is called; otherwise vcl_miss is called.
sub vcl_hash {
    # This subroutine stores the url as the key in varnish.
    hash_data(req.url);
    # If the host is set, store it; else store the server ip.
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    # Look up the cache, then call vcl_hit or vcl_miss.
    return (lookup);
}
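vcl_hash builds the cache key from the URL plus the Host header (or the server IP when no Host is present), so the same path on two hosts occupies two cache entries. A Python sketch of the same keying (the tuple-as-key representation is an illustration, not varnish's internal format):

```python
# Mirror of the vcl_hash keying rule: url plus host-or-server-ip.
def cache_key(url, host=None, server_ip="127.0.0.1"):
    return (url, host if host else server_ip)

print(cache_key("/footwear/", host="shop.example.com"))
# ('/footwear/', 'shop.example.com')
print(cache_key("/footwear/"))
# ('/footwear/', '127.0.0.1')
```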
3. vcl_hit: This is called when the object is present in the cache.
sub vcl_hit {
    # Called when a cache lookup is successful.
    # Here we have the obj object instead of the req object,
    # because the cache lookup hands obj to this subroutine.
    # obj.ttl is the object's remaining time to live; if it is
    # greater than zero, call the vcl_deliver subroutine.
    if (obj.ttl >= 0s) {
        return (deliver);
    }
}
4. vcl_miss: This is called when the object is not present in the cache; varnish then fetches the object from the backend, stores it in the cache, and serves the request.
sub vcl_miss {
    return (fetch);
}
5. vcl_backend_response: This subroutine is called after a request is fetched from the backend, i.e. after vcl_miss and before the fetched object is delivered. We set the TTL on the object to 1 hour; the TTL value can be given in seconds (3600s), minutes (60m), or hours (1h).
sub vcl_backend_response {
    # This subroutine works on the beresp object.
    # Set the TTL of the object to 1 hour.
    set beresp.ttl = 60m;
    # Allow stale content in case the backend goes down:
    # make varnish keep all objects for 6 hours beyond their TTL.
    set beresp.grace = 6h;
    # The deliver argument calls the vcl_deliver subroutine.
    return (deliver);
}
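beresp.ttl and beresp.grace combine into a simple freshness timeline: an object is fresh while its age is under the TTL, still usable (stale) until TTL plus grace, and gone after that. A sketch of that rule, using the 1-hour TTL and 6-hour grace configured above:

```python
# Freshness decision implied by the ttl/grace settings above.
def cache_state(age_s, ttl_s=3600, grace_s=6 * 3600):
    if age_s < ttl_s:
        return "fresh"
    if age_s < ttl_s + grace_s:
        return "stale-but-usable"   # grace keeps the site up if the backend dies
    return "expired"

print(cache_state(600))     # fresh
print(cache_state(7200))    # stale-but-usable
print(cache_state(30000))   # expired
```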
6. vcl_deliver: This is called just before passing the cached object to the client. Here we can modify the headers passed to the client.
sub vcl_deliver {
    # Add a debug header to know if the current request is a hit or a miss.
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }

    # Set the cache control headers here.
    set resp.http.Cache-Control = "no-store, must-revalidate, post-check=0, pre-check=0";

    # Update the cache hits.
    set resp.http.X-Cache-Hits = obj.hits;

    # Some headers can be removed if not required.
    # A few examples are left below in comments.
    # Remove some headers: PHP version
    # unset resp.http.X-Powered-By;
    # Remove some headers: Apache version & OS
    # unset resp.http.Server;
    # unset resp.http.X-Drupal-Cache;
    # unset resp.http.X-Varnish;
    # unset resp.http.Via;
    # unset resp.http.Link;
    # unset resp.http.X-Generator;

    # Finally, this delivers the response back to the client.
    return (deliver);
}
—
Thanks,
Navjot Singh
Team AWS, Intelligrape | http://www.tothenew.com/blog/varnish/ | CC-MAIN-2018-05 | refinedweb | 1,316 | 59.9 |
Does your Oculus VR development have mysterious problems? Has your development stream come to a screeching halt because of some lack of operation? Have you inadvertently developed yourself in a dark, dark foreboding corner? For you see or don't see the Oculus does not have a runtime debug monitor console(no arguments, please. Just follow along here). Wouldn't it be nice to see data while in the headset running your tests? Or how about listing sql database data ad infinitum? Or dumping the controller and headset to a panel that follows your gaze so you can SEE what is under the cover of darkness? Well have no fear. I have come to the rescue. Yes, I am posting a question and answering it too. Oh downgrade me in the range of multiplicity those self proclaimed prodigies of nothingness. This answer is here and not on stackanything for fear of rejective impulses by the group masochists. Here ya go:
//*************************************************************************
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using TMPro;

public class MonitorThis : MonoBehaviour
{
    private TextMeshPro textMesh;

    // Start is called before the first frame update
    void Start()
    {
        textMesh = gameObject.GetComponent<TextMeshPro>();
        if (textMesh == null) Debug.Log("No monitor");
        //textMesh.text = "";
    }

    // Append a line of text to the in-world monitor.
    public void MonitorLine(string labelStr)
    {
        textMesh.text = textMesh.text + labelStr + "\n";
    }
}
//*************************************************************************
// Create a GameObject tagged "CollisionMonitor"
// Add a component: Mesh/TextMeshPro Text
// Position it where you want in the scene
//*************************************************************************
//Then in calling routine add this:
using TMPro;

private MonitorThis m_MonitorThis;

// GameObject with a TextMeshPro component, found by tag:
GameObject m_collisionMonitor = GameObject.FindWithTag("CollisionMonitor");
if (m_collisionMonitor != null)
{
    m_MonitorThis = m_collisionMonitor.GetComponent<MonitorThis>();
}

// Then the calling routine can execute:
m_MonitorThis.MonitorLine("\nCollision Time: " + Time.realtimeSinceStartup.ToString());
// If called from Update() you will slow the game down but see lots of data.
// This can be very beneficial.
Enjoy. Oh, wait till you see the massive amounts of data streaming in front of your eyes! Gleefully relish the towering edifices of your data accumulating before your very eyes! Your life will never be the same.
I stumbled across the pywebview project a couple of weeks ago. The pywebview package “is a lightweight cross-platform wrapper around a webview component that allows to display HTML content in its own native GUI window.” It uses WebKit on OSX and Linux and Trident (MSHTML) on Windows, which is actually what wxPython’s webview widget also does. The idea behind pywebview is that it provides you the ability to load a website in a desktop application, kind of Electron.
While pywebview claims it “has no dependencies on an external GUI framework”, on Windows it requires pythonnet, PyWin32 and comtypes installed. OSX requires “pyobjc”, although that is included with the default Python installed in OSX. For Linux, it’s a bit more complicated. On GTK3 based systems you will need PyGObject whereas on Debian based systems, you’ll need to install PyGObject + gir1.2-webkit-3.0. Finally, you can also use PyQt 4 or 5.
You can use Python micro-web frameworks, such as Flask or bottle, with pywebview to create cool applications using HTML5 instead of Python.
To install pywebview itself, just use pip:
pip install pywebview
Once installed and assuming you also have the prerequisites, you can do something like this:
import webview

webview.create_window('My Web App', '')
This will load the specified URL in a window with the specified title (i.e. the first argument). Your new application should end up looking something like this:
The API for pywebview is quite short and sweet and can be found here:
There are only a handful of methods that you can use, which makes them easy to remember. But since you can’t create any other controls for your pywebview application, you will need to do all your user interface logic in your web application.
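Since all the user interface logic has to live in the web application, one common structure is to run a small local web server (Flask, bottle, or even the standard library) and point the pywebview window at it. Here is a minimal standard-library sketch; the server and page content are invented for illustration, and the pywebview call is left commented out so the snippet runs without pywebview installed:

```python
import http.server
import threading
import urllib.request

# A minimal local web app; in practice this could be a Flask or bottle app.
class Page(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<h1>Hello from a local web app</h1>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the console quiet
        pass

def start_server():
    # port 0 asks the OS for any free port
    server = http.server.HTTPServer(("127.0.0.1", 0), Page)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, "http://127.0.0.1:%d/" % server.server_address[1]

if __name__ == "__main__":
    server, url = start_server()
    html = urllib.request.urlopen(url).read().decode()
    print(html)
    # With pywebview installed, you would now hand the URL to the window:
    # import webview
    # webview.create_window('My Web App', url)
    server.shutdown()
```

The window then behaves like a browser pointed at your local app, so everything you know about HTML5 development applies inside it.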
The pywebview package supports being frozen using PyInstaller for Windows and py2app for OSX. It also works with virtualenv, although there are known issues that you will want to read about before using virtualenv.
Wrapping Up
The pywebview package is actually pretty neat and I personally think it’s worth a look. If you want something that’s a bit more integrated to your desktop, then you might want to give wxPython or PyQt a try. But if all you need to do is distribute an HTML5-based web app, then this package might be just the one for you. | http://www.blog.pythonlibrary.org/2017/04/25/getting-started-with-pywebview/ | CC-MAIN-2018-13 | refinedweb | 397 | 60.55 |
Message Correlation using JMS by Martien van den Akker
By Juergenkress-Oracle on Aug 08, 2014
Last year I created a few OSB services with the asynchronous request response message exchange pattern. OSB does not support this out of the box, since OSB is in fact synchronous in nature. Although OSB supports the WS-Addressing namespaces, you need to set the WS-Addressing elements programmatically.
Since OSB is synchronous, the request and response flows in the Asynchronous Request/Response pattern are implemented completely separately from each other. That means that in the response flow you don't know what request message was responsible for the current response. Even worse: you don't know which client made the request, or how to respond to that client in a way that it can correlate to the initiating instance. Using SOA/BPM Suite as a client, you want to correlate to the requesting process instance.
There are of course several ways to solve this. I chose to use a Uniform Distributed Queue for several reasons; knowledge of JMS and performance were a few. I only need to store a message temporarily against a key. Coherence was not on my CV yet. And a database table requires a database (connection) with the query overhead, etc.
Unfortunately you can't use the OSB transports or the SOA Suite JMS adapters to get or browse for a message by correlation id in a synchronous way. When you create a proxy service on a JMS transport, or configure a JMS Adapter for reads, you end up with a polling construction. But it's quite easy to do in Java, so I created a Java method that gets a message based on a CorrelationId.
One thing I did not know back then was that if you put a message on the queue from one OSB server node (having a JMS server), it can't be read from the other node as such: messages are stored in the local JMS server member of the queue.
I found that you can quite easily reach the local member of a Uniform Distributed Queue on a particular JMS server in WebLogic by prefixing the JNDI name of the queue with the JMS server name, separated by the at sign (@).
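In JMS terms, the Java method boils down to a synchronous receive filtered on the correlation id (typically a message selector such as JMSCorrelationID = 'req-42' on a queue receiver). The semantics can be sketched with an in-memory stand-in; the Python below models the pattern only and is not an actual JMS API:

```python
import queue

class CorrelationStore:
    """Toy stand-in for get-by-correlation-id: each id gets its own
    small queue, so a requester can block until its reply arrives."""

    def __init__(self):
        self.replies = {}

    def _q(self, correlation_id):
        return self.replies.setdefault(correlation_id, queue.Queue())

    def put(self, correlation_id, message):
        # response flow: file the reply under its correlation id
        self._q(correlation_id).put(message)

    def get(self, correlation_id, timeout=1.0):
        # request flow: block until the reply with this id shows up
        return self._q(correlation_id).get(timeout=timeout)

store = CorrelationStore()
store.put("req-42", {"status": "OK"})
print(store.get("req-42"))  # → {'status': 'OK'}
```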
In this section, we'll explore the architecture, core functionality, and UI support in the Compact Framework and compare it with the desktop Framework. In this way, developers and technical managers can quickly get a feel for the technology involved in developing mobile applications using the Compact Framework.
You may recall that one of the design goals of the Compact Framework was to create a "portable (and small) subset of the desktop Framework, targeting multiple platforms." To support this goal Microsoft created the architecture shown in Figure 2-3. In this section we'll walk through the components of that architecture from the bottom up to describe how each contributes to this goal.
The size of the Compact Framework installed on the device varies from device to device. Generally, it ranges from 1.7MB to 2.6MB and can be installed in RAM, ROM, or FlashROM. As you might expect, the initial release is installed in RAM so that it is immediately available to all devices. However, in the future, expect OEMs to offer FlashROM upgrades that include the Compact Framework for devices like the Pocket PC 2002 (where FlashROM is already present). Future devices (i.e., Pocket PC 2003 devices) will likely ship with the Compact Framework in ROM.
Obviously, at the lowest level, the code written for the Compact Framework must be executed on a host operating system such as Windows CE. At this time, the Compact Framework will run on Windows CE 3.0 and Windows CE .NET 4.1, although as we'll see, the architecture in Figure 2-3, particularly through the inclusion of the PAL and the NSLs, lends itself to portability to other host operating systems as well.
The PAL is the primary component that makes platform portability possible. Essentially, the PAL contains a variety of subsystems that expose the functionality of the underlying operating system and hardware in a consistent set of APIs to the NSL and EE, as shown in Figure 2-3. For example, the PAL includes interfaces for device drivers, a system memory manager, interrupts and timers, multimedia, and I/O ports, among others. All of these subsystems must be fully implemented on the target device.
As a result, in order to port the Compact Framework between devices, OEMs must rewrite the PAL to make native calls on the target operating system and to the hardware. Of course, depending on the features of the device and what its native operating system supports, the functions of the PAL may or may not map to the operating system in a straightforward manner. For example, the PAL was designed with Windows CE and the Pocket PC in mind, and so, many of its APIs simply map directly to APIs exposed by Windows CE.
In summary, you can think of the PAL as the equivalent of a device driver used by the Compact Framework that abstracts and drives the underlying operating system and its hardware.
Because not all devices support the same set of services, the Compact Framework also includes a set of NSLs that implement features that the Compact Framework requires, including file system operations, heap management, globalization, cryptography, and graphical user interface (GUI) manipulation.
These services then make calls into the PAL to perform their operations and are in turn called by the EE, as shown in Figure 2-3. A typical example is the GUI support implemented as an NSL that is then exposed by the classes in the System.Windows.Forms namespace in the Compact Framework class library.[10] As you might expect, the NSLs can also be called by other native code running on the device, thereby substantially increasing the feature set available to other unmanaged applications as well.
[10] This NSL is implemented by the file Netcfagl1_0.dll installed on the device. On Windows CE, this Advanced Graphics Library interfaces with the Graphics, Windowing, and Event Subsystem (GWES), as shown in Figure 1-2.
The addition of NSLs levels the playing field for devices that do not support these core features, making the Compact Framework capable of running on a wide variety of devices.
Because the NSLs make use of the PAL, OEMs do not have to port the code to implement them as they do with the PAL. The NSLs can simply be compiled for the target platform.
The EE in Figure 2-3 provides essentially the same set of services that the common language runtime does for desktop and server applications shown in Figure 2-2 by managing the execution of a .NET application. However, because it performs these functions in an environment where resources are scarce (on devices with less memory and a slower CPU), the EE was designed from the ground up with these constraints in mind and, as a result, performs some of them differently. Even so, the core technology, like the desktop Framework, still conforms to the ECMA-335 specification. The EE was written in C and is implemented in two DLLs, Mscoree.dll (the stub) and Mscoree1_0.dll (the bulk of the EE), which, like the NSLs, are compiled for the target platform per CPU and operating system.[11]
[11] Typically, the EE ranges in size from 400K to 500K depending on the operating system and CPU architecture.
To get a better understanding of how the EE does its work, the following list explicates some of the core functionality in the order it is encountered during the execution of a managed application:
Class Loader: As with the desktop Framework, code executed by the Compact Framework must have been previously compiled into MSIL instructions and placed in an assembly (a PE file) on the device. The compilation occurs on the developer's PC using SDP, as explained later in this chapter. As the name implies, the job of the Class Loader is to locate and load the assemblies required to execute an application. However, before the Class Loader can do its work, the application must be activated at the operating system level, which occurs when the Compact Framework application is executed by the host operating system.[12] At that time, a process is created, and Mscoree.dll (and subsequently Mscoree1_0.dll) is loaded by the operating system[13] into the process. At this point an Application Domain is created, and the EE takes over execution of the application within the domain running in the operating system process.[14] As with the desktop Framework, Application Domains serve as a means to isolate Compact Framework applications running within the same process and can therefore be thought of as "lightweight processes." Once the EE has been invoked, the Class Loader can then do its job by loading the set of assemblies, with the required versions, necessary to execute the application. It does this by inspecting the metadata in the assembly that includes the information about dependent assemblies. The list of required assemblies can (and often does) include both custom assemblies that developers create and assemblies that ship with the Compact Framework, such as System.Windows.Forms.[15] The Compact Framework Class Loader uses a simpler scheme for binding than does the desktop Framework. In short, the Class Loader looks at the major and minor version number (of the four-part naming scheme) of the referenced assembly and will load it as long as they are the same as the version the calling assembly was compiled with. 
The Class Loader also supports side-by-side execution, which means that a Compact Framework application always runs on the version of the EE with which it was compiled.
[12] For example, the Windows CE PE Loader.
[13] For example, using the Windows CE LoadLibrary API.
[14] The APIs required to create and manage Application Domains within a custom host process are not documented in the initial release of the Compact Framework. It is also not possible to load assemblies into a domain-neutral code area for use by multiple Application Domains.
[15] The Compact Framework, however, does not support multifile assemblies as the desktop Framework does.
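The binding rule just described (load the referenced assembly as long as major and minor match) can be sketched as a tiny check. The function name and version strings below are illustrative only, not part of any Compact Framework API:

```python
def loader_accepts(referenced_version, found_version):
    """Sketch of the Compact Framework binding rule: an assembly is
    loaded as long as its major.minor version matches what the calling
    assembly was compiled against (build/revision may differ)."""
    return referenced_version.split(".")[:2] == found_version.split(".")[:2]

print(loader_accepts("1.0.5000.0", "1.0.5500.0"))  # → True  (only the build differs)
print(loader_accepts("1.0.5000.0", "1.1.4322.0"))  # → False (the minor version differs)
```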
Type Checker: After the Class Loader has loaded the required assemblies, the Type Checker is invoked to determine if the MSIL code is safe to execute. In this way, the Compact Framework provides the same verifiably type-safe execution as the desktop Framework, for example, by making sure that there are no uninitialized variables, that parameters match their types, that there are no unsafe casts, that the array indexes are within bounds, and that pointers are not out of bounds.
JIT compiler: Once the Type Checker has verified the code and completed successfully, the MSIL code can be JIT-compiled to native instructions on the CPU. And, as with the desktop Framework, the compilation occurs on a method-by-method basis as each method is invoked. However, the Compact Framework JIT compiler must be especially sensitive to the resource constraints of the device and so uses a code-pitching technique to free blocks of memory when resources are low. This works by marking sections of JIT-compiled code that were recently executed and then allowing the least recently executed blocks to be reclaimed in a process similar to that used by a GC. As with most things, this too is a trade-off because MSIL code must be recompiled if it is subsequently executed. However, using this technique typically ensures that the core working set of the application stays natively compiled in memory. It should be noted that the Compact Framework does not support compiling an entire application to native code at install time using the native code generation (Ngen.exe) command-line utility as the desktop Framework does.
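Code pitching is essentially least-recently-used eviction applied to compiled method bodies. A schematic model of the idea (the class, capacity, and method names here are invented; the real heuristics are driven by memory pressure and are more involved):

```python
from collections import OrderedDict

class JitCache:
    """Keeps at most `capacity` compiled methods; pitches the least
    recently executed block when the budget is exceeded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.code = OrderedDict()

    def run(self, method):
        if method not in self.code:
            self.code[method] = f"native({method})"  # pretend to JIT-compile
        self.code.move_to_end(method)                # mark as recently executed
        while len(self.code) > self.capacity:
            self.code.popitem(last=False)            # pitch the coldest block
        return self.code[method]

jit = JitCache(capacity=2)
for m in ["Main", "Draw", "Main", "Save"]:
    jit.run(m)
print(list(jit.code))  # → ['Main', 'Save']  ('Draw' was pitched; rerunning it recompiles)
```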
Thread support: As a Compact Framework application runs, it can gain access to underlying operating system threads through the Thread class in the System.Threading namespace. This allows Compact Framework developers to create applications that appear more responsive by offloading work (for example, a call to an XML Web Service) to background threads, while waiting for stylus input from the user.[16] It should be noted that the Compact Framework was designed to coexist peacefully with the host operating system and so relies on native operating system threads and synchronization primitives. As a result, operating system scheduling priorities also apply to Compact Framework applications, and threads produced by the EE can coexist with native threads in the same process. In addition, the Compact Framework includes a thread pool (System.Threading.ThreadPool) that allows a developer to queue a method for execution on one of a number of background worker threads controlled by the EE. When a thread in the pool is free, the method will execute and, when finished, can notify the main thread through a callback.
[16] The Application Domain, in which a multithreaded Compact Framework application runs, will exist until all of the created threads have exited.
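The pattern described here (queue a method to background workers and get notified through a callback when it finishes) looks roughly like the following, with Python's threading standing in for System.Threading:

```python
import queue
import threading

results = queue.Queue()

def work(n, done):
    # the "queued method": does some work, then notifies via the callback
    done(n * n)

# a stand-in for the pool's worker threads
workers = [threading.Thread(target=work, args=(i, results.put)) for i in range(4)]
for t in workers:
    t.start()
for t in workers:
    t.join()

print(sorted(results.get() for _ in range(4)))  # → [0, 1, 4, 9]
```

The main thread stays free to handle stylus input while the workers run, which is exactly the responsiveness win described above.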
Exception handling: During the execution of an application, unforeseen events sometimes transpire. To handle these gracefully, the Compact Framework supports structured exception handling (SEH), as does the desktop Framework. This allows developers to use Try-Catch semantics in their code and to test for specific types of exceptions thrown by the application. And, as with its desktop cousin, the EE of the Compact Framework is optimized for the nonexceptional case, and so throwing exceptions should be reserved for true exceptions and not simply to signal a normal occurrence. The key difference in exception handling in the Compact Framework is that the error strings are actually stored in a separate assembly, System.SR.dll. This is due to the resource constraints of devices and allows the developer optionally to install this assembly on the device. If present, the EE will load it and display the appropriate message, and, if not, a default message will be displayed.
GC: One of the most discussed features of the common language runtime is the GC, which is responsible for managing memory by collecting and deallocating objects that are no longer used. As you might expect, the design of the GC is especially important in the constrained environment of a mobile device, and, for that reason, it differs from the GC implemented in the desktop Framework. The GC in the Compact Framework consists of an allocator and a collector. The allocator is responsible for managing the object pools that provide storage for the instance data associated with an object, while the collector implements the GC algorithm. At a high level, the collector runs on a background thread when resources are low and, while working, freezes all other active threads at a safe point. It then finds all reachable objects by traversing the various thread call stacks and global variables and marks them. The collector then frees all the unmarked objects and executes their finalizers.[17] Finally, the object pools are compacted, which returns free memory to the global heap. This approach is referred to as a "mark-and-sweep approach" and does not use the concepts of generations or implement a finalization queue, as does the more complex GC of the desktop Framework.
[17] Finalizers are the destructors associated with an instance of a class. Destructors are often used explicitly to free resources but are not required.
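The mark-and-sweep approach is easy to model: treat the heap as an object graph, mark everything reachable from the roots (thread stacks, globals), and free the rest. A schematic sketch, not the actual collector:

```python
def collect(heap, roots):
    """heap: {obj_id: [ids it references]}; roots: ids reachable from
    call stacks and globals. Returns the heap with garbage swept."""
    marked = set()
    stack = list(roots)
    while stack:                       # mark phase: traverse from the roots
        obj = stack.pop()
        if obj not in marked:
            marked.add(obj)
            stack.extend(heap.get(obj, []))
    # sweep phase: everything unmarked is unreachable and is freed
    return {o: refs for o, refs in heap.items() if o in marked}

heap = {"a": ["b"], "b": [], "c": ["d"], "d": ["c"]}  # c<->d is an unreachable cycle
print(sorted(collect(heap, roots=["a"])))  # → ['a', 'b']
```

Note that the unreachable cycle between "c" and "d" is collected; that is a property reference counting alone would not give you.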
In addition to the features discussed here, the EE also provides other services that will be discussed in more detail in the following section, including exception handling, native code interoperation, and debugging.
In order to create a robust programming environment for devices, the Compact Framework ships with a set of class libraries in assemblies organized into hierarchical namespaces similar to those found in the desktop Framework described previously and shown in Figure 2-3. However, there are four major differences between the class libraries shipped with the desktop Framework and those included in the Compact Framework.
ASP.NET: Because the Compact Framework is designed to support applications that execute on the device, it does not include any support for building Web pages hosted on a Web server running on the device. This means that the classes of the System.Web namespace familiar to ASP.NET developers are not found in the Compact Framework. To write Web applications that can be accessed by a mobile device, use the ASP.NET Mobile Controls as discussed in Chapter 1.
COM Interop: Because the Windows CE operating system and the eVC++ tool support creating COM components and ActiveX controls, it would be nice if the Compact Framework supported the same COM Interop functionality (complete with COM callable wrappers and interop assemblies) as does the desktop Framework. Unfortunately, COM Interop did not make it into the initial release of the Compact Framework. However, it is possible to create a DLL wrapper for a COM component using eVC++ and then to call the wrapper using the Platform Invoke (PInvoke) feature of the Compact Framework, which allows native APIs to be called. Examples of using PInvoke can be found throughout this book, but especially in Chapter 11.
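The job PInvoke performs (declaring a native export's signature and calling it from managed code) has a close analogue in Python's ctypes, which can serve as a rough illustration of the idea. This calls pow from the C math library; library name resolution varies per OS, so the fallback name is an assumption:

```python
import ctypes
import ctypes.util

# locate the C math library; the file name differs per OS, so ask ctypes
libm_path = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(libm_path)

# declare the native signature, much as a DllImport declaration does
libm.pow.restype = ctypes.c_double
libm.pow.argtypes = [ctypes.c_double, ctypes.c_double]

print(libm.pow(2.0, 10.0))  # → 1024.0
```

In the Compact Framework the equivalent declaration names the wrapper DLL and the exported function, and marshaling of the parameters happens the same way.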
OleDb access: The Compact Framework omits the System.Data.OleDb namespace and so does not support the ability to make calls directly to a database using the OleDb .NET Data Provider. However, the remote data access (RDA) features of SQL Server CE do support pulling data down from a SQL Server that can act as a repository for data from other data sources, as discussed in Chapter 7.
Generic serialization: The desktop Framework supports binary and SOAP serialization of any object through the use of the Serializable attribute, the ISerializable interface, and the XmlSerializer class in the System.Xml.Serialization namespace. This functionality is not supported in the Compact Framework. However, the Compact Framework does support serializing objects to XML for use in XML Web Services and serializing DataSet objects to XML as discussed in Chapter 3.
Asynchronous delegates: Delegates in both the desktop Framework and Compact Framework can be thought of as object-oriented function pointers. They are used to encapsulate the signature and address of a method to invoke at runtime. While delegates can be called synchronously, they cannot be invoked asynchronously and passed a call back method in the Compact Framework. However, it should be noted that asynchronous operations are supported for some of the networking functionality found in the System.Net namespace and when calling XML Web Services described in Chapter 4. In other cases, direct manipulation of threads or the use of a thread pool is required as described in Chapter 3.
Application configuration files: The desktop Framework includes a ConfigurationSettings class in the System.Configuration namespace. This class is used to read application settings from an XML file associated with the application and called appname.exe.config. The Compact Framework does not support this class, but developers can write their own using the classes in the System.Xml namespace discussed in Chapter 3. An example class of this type can be found in the book by Wigley and Wheelright referenced in the "Related Reading" section at the end of the chapter.
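Such a hand-rolled reader only needs to walk the appSettings elements of the .config XML. The shape of the idea, sketched in Python (the keys and file content are made up; a Compact Framework version would use XmlReader or XmlDocument from System.Xml in the same way):

```python
import xml.etree.ElementTree as ET

# a typical appname.exe.config fragment (contents invented for illustration)
CONFIG = """<configuration>
  <appSettings>
    <add key="ServerUrl" value="http://example.com/service" />
    <add key="Timeout" value="30" />
  </appSettings>
</configuration>"""

def read_settings(xml_text):
    """Return the appSettings section as a plain key -> value dict."""
    root = ET.fromstring(xml_text)
    return {e.get("key"): e.get("value") for e in root.find("appSettings")}

settings = read_settings(CONFIG)
print(settings["Timeout"])  # → 30 (note: read as the string "30")
```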
.NET remoting: In the desktop Framework, it is possible to create applications that communicate with each other across application domains using classes in the System.Runtime.Remoting namespace. This technique allows for data and objects serialized to SOAP or a binary format to be transmitted using TCP or HTTP.[18] This functionality is not supported (in part because generic serialization is not supported) in the Compact Framework, where, instead, XML Web Services and the Infrared Data Association (IrDA) protocol can be used, as discussed in Chapter 4.
[18] See Chapter 8 of Building Distributed Applications with Visual Basic .NET for an overview of .NET Remoting.
Reflection emit: Although the Compact Framework does support runtime type inspection using the System.Reflection namespace,[19] it does not support the ability to emit dynamically created MSIL into an assembly for execution.
[19] For example, to create objects dynamically at runtime using the Activator.CreateInstance method.
Printing: Although the Compact Framework does support graphics and drawing through a subset of the GDI+ functionality of the desktop Framework, it does not support printing through the System.Drawing.Printing namespace.[20]
[20] The most popular third-party printing software is the PrinterCE SDK from Field Software Products (). Look for a Compact Framework version of their SDK in the near future.
XPath/XSLT: Support for XML is included in the Compact Framework and allows developers to read and write XML documents using the XmlDocument, XmlReader, and XmlWriter classes, as discussed in Chapter 3. However, it does not support executing XPath queries or performing XML Stylesheet Language (XSL) transformations.
Server-side programming models: As you would expect, in addition to those shown in Figure 2-4, the Compact Framework also does not support the server-side programming models, including System.EnterpriseServices (COM+),[21] System.Management (Windows Management Instrumentation, or WMI),[22] and System.Messaging (Microsoft Message Queue Server, or MSMQ).[23]
[21] Or Component Services that enable .NET components to access services such as distributed transactions, object pooling, and loosely coupled events.
[22] Used to monitor services on a Windows machine.
[23] Used to create asynchronous message-based applications.
Multimodule assemblies: The desktop Framework supports the ability to deploy an assembly as a collection of files. This is useful for creating assemblies authored with multiple languages. This feature is not supported in the Compact Framework where a single file (.exe or .dll) represents the entire assembly.
The second major difference between the desktop class libraries and those included in the Compact Framework is how they are factored into assemblies. Simply put, in the Compact Framework the 14 assemblies that comprise the class libraries are more granular than those found in the desktop Framework. For example, in the desktop Framework, the classes of the System.Data.SqlClient namespace used to access SQL Server are included in the System.Data.dll assembly, whereas in the Compact Framework they are factored into their own assembly. In this way, the Compact Framework can support a smaller footprint if applications installed on the device do not require some of the Compact Framework class libraries.
The third major difference is that the Compact Framework supports two additional namespaces (each shipped in its own assembly) that expose functionality particular to smart devices, as shown in Table 2-1.
In addition, support for the IrDA protocol has been included in the Compact Framework and exposed in the six classes found in the System.Net.Sockets and the System.Net namespaces, as covered in Chapter 4.
Finally, the Compact Framework supports a subset of the core types supported in the desktop Framework. These are often referred to as the Common Type System (CTS) types because they are the foundational types. Table 2-2 presents the CTS types found in the desktop Framework and their support in the Compact Framework. Note that in each case, some of the overloaded methods and properties found in the desktop Framework are not supported in the Compact Framework.
Developers familiar with eVB will no doubt note the vast difference between the strongly typed Compact Framework environment and eVB, where all variables are of the type Variant. This not only makes source code easier to read and avoids runtime errors, it also saves memory because in eVB each variable consumes a minimum of 16 bytes, whereas the types in the Compact Framework consume fewer; for example, System.Int32 is four bytes.
In all, the Compact Framework class libraries provide a wealth of functionality that allows developers to create robust applications for devices.
All development teams would like to be able to leverage the code they write by reusing it in as many scenarios as possible, and the developers of Compact Framework code are no exception. There are four key scenarios where portability must be addressed: device to device, desktop to device; device to desktop, and eMbedded Visual Tools to Compact Framework.
Although devices targeted for the Compact Framework span a variety of hardware manufacturers and processors, the architecture illustrated in Figure 2-3 allows Compact Framework applications to be moved between devices without recompilation. This is the case for two reasons. First, the MSIL code placed in the assembly by the compiler is machine-independent, allowing the JIT compiler of the EE to compile to native instructions for execution. In addition, the system assemblies contain the same set of classes, are factored identically, and are versioned identically on all platforms. This allows your development team to create a single binary to support multiple devices running on multiple CPUs (x86, SH3, ARM, MIPS), all of which are loaded with the Compact Framework.
In this scenario the caveat is that Compact Framework applications also support several platforms (Pocket PC 2000, 2002, Windows CE .NET 4.1) and any particular application may rely on platform-specific features, for example, the InputPanel class to control the SIP on Pocket PC. In these cases, the application would need to be modified to remove the unsupported features and recompiled before executing it on another platform. Not doing so may result in exceptions being thrown or unpredictable behavior. For example, if a Compact Framework application targeted for Windows CE .NET 4.1 attempts to display the SIP using the InputPanel class, the EE throws a NotSupportedException. To work around these issues, it is possible to determine programmatically the platform using a native API call as discussed in Chapter 11.
Even in the simplest case, if an assembly is created in the desktop Framework and then referenced in an SDP in VS .NET, both a warning dialog and compiler errors will result, indicating that the mscorlib assembly referenced by the desktop Framework assembly differs from that referenced by the SDP. For this reason, it is recommended that all desktop Framework assemblies for use on the Compact Framework first be recompiled.
Interestingly, it is possible to load a desktop Framework assembly on the Compact Framework using the Assembly class of the System.Reflection namespace. However, because the Compact Framework does not support late binding, invoking the methods requires more runtime type inspection (using the Type and MethodInfo classes) and is therefore unwieldy at best.
This scenario has much in common with the previous one. Although the Compact Framework uses the same standard PE file format, header, and metadata as that used by the desktop Framework, applications created for the Compact Framework will be designed for the constraints of the device and will use platform-specific assemblies (Microsoft.WindowsCE.Forms) and functionality (IrDA). For this reason, most Compact Framework code will need to be modified and recompiled in a desktop Framework project for execution on the desktop.
Of course, because of the UI, memory, and other constraints of devices, this sort of binary compatibility will likely be useful only for developing custom code libraries (business logic, sorting, searching, data access, string, and file manipulation, for example).
As mentioned in Chapter 1, before the availability of the Compact Framework, developers creating applications for smart devices used eVC++ and eVB that together are referred to as the eMbedded Visual Tools. However, because the Compact Framework uses an entirely new EE, class libraries, along with a new IDE and language syntax, porting an application written in eVB will require a substantial rewrite. And unlike in the desktop Framework, there is no tool available in VS .NET to assist in the upgrade process. For this reason, development teams will likely port only eVB applications to Compact Framework when adding significant additional functionality, for example, by adding support for XML Web Services. | http://etutorials.org/Programming/building+solutions+with+the+microsoft+net+compact+framework/Part+1+The+PDA+Development+Landscape+with+the+Compact+Framework/Chapter+2.+Components+of+Mobile+Development/The+.NET+Compact+Framework/ | crawl-001 | refinedweb | 4,239 | 50.77 |
#include <dc1394_stereo.h>

Inherits bj::Stereo.

The DC1394Stereo class provides an interface to FireWire (IEEE 1394) stereo heads (e.g. the STH-MDCS2 Stereo Head by Videre Design). The synchronized stereo images are encoded in YUV422 format: the left image on the Y plane, and the right image on the U and V planes.

Constructor defaults: device "/dev/video1394/0", width 640, height 480, channel 0, 30 frames per second, Bayer tiling BAYER_NONE.

Member documentation (member names were lost in this extract; descriptions are preserved in order):

- Constructor: opens a video device and initializes the device using the default values above.
- Destructor.
- Query the current Bayer tiling format. [inline]
- Capture a pair of images and store them in IplImage format. [virtual] Implements bj::Stereo. Returns false on failure.
- Capture a pair of images.
- Query the length of PNM data, excluding the header.
- Decode Bayer tiling. [protected]
- Decode the YUV422 stereo coding.
- Query the number of frames per second. [inline, virtual] Reimplemented from bj::Stereo.
- Query the length of the PNM header.
- Query the height of input images.
- Set a Bayer tiling format.
- Make the camera on a specified channel start sending images.
- Make the camera on a specified channel stop sending images.
- Query the stereo camera type.
- Query the width of input images.
I'm a beginner in Python, and tried to take MIT 6.00 (the page linked is the assignments page).
I'm at assignment 2, where I have to find a solution for a Diophantine equation. I'm really not that great at math, so I tried to understand what it does as much as I can, and to think of a solution for it.
Here's what I got to:
def test(x):
for a in range(1,150):
for b in range(1,150):
for c in range(1,150):
y = 6*a+9*b+20*c
if y == x:
print "this --> " , a, b, c
break
else : ##this to see how close i was to the number
if y - x < 3:
print a, b, c , y
50, 51, 52, 53, 54, and 55
50, 53 and 55
The assignment says:
To determine if it is possible to buy exactly n McNuggets, one has to solve a Diophantine equation: find non-negative integer values of a, b, and c, such that 6a + 9b + 20c = n.
It seems that you have to include zero in the ranges of your function. That way, you can find solutions for all the numbers you need. | https://codedump.io/share/tc5hlPfXd29r/1/solving-three-variables-diophantine-equation-in-python | CC-MAIN-2017-09 | refinedweb | 200 | 74.56 |
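Building on that answer, here's one way the search could be corrected (a sketch, not the official course solution): start the ranges at 0, derive the bounds from n, and return from the function as soon as a combination is found, since break only exits the innermost loop.

```python
# A corrected sketch of the brute-force search: ranges start at 0 (you can
# buy zero packs of a given size), the bounds are derived from n, and we
# return immediately on success instead of only breaking the inner loop.
def can_buy(n):
    for a in range(0, n // 6 + 1):
        for b in range(0, n // 9 + 1):
            for c in range(0, n // 20 + 1):
                if 6 * a + 9 * b + 20 * c == n:
                    return (a, b, c)
    return None

print(can_buy(50))  # → (2, 2, 1), since 6*2 + 9*2 + 20*1 == 50
print(can_buy(43))  # → None: 43 McNuggets cannot be bought exactly
```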
How to Set Up a GraphQL Server with Apollo Server and Express
May 17th, 2021
What You Will Learn in This Tutorial
How to properly configure and handle requests to a GraphQL server using the Apollo Server library in conjunction with an existing Express.js server.
Getting started
To get started, we're going to rely on the CheatCode Node.js Boilerplate. This will give us an already setup GraphQL server to work with and add context to the explanations below. First, clone the boilerplate via Github:
Terminal

```shell
git clone
```

`cd` into the cloned `nodejs-server-boilerplate` directory and install the dependencies:

Terminal

```shell
cd nodejs-server-boilerplate && npm install
```

Next, let's manually add the `apollo-server` dependency (this is different from the `apollo-server-express` dependency that's already included in the boilerplate; we'll look at this later):

Terminal

```shell
npm i apollo-server
```
Once this is complete, all of the dependencies you need for the rest of the tutorial will be installed. Now, to start, let's take a look at how to set up a basic GraphQL server with Apollo Server.
Setting up the base server
To get started, we need to import two things as named exports from `apollo-server`: the `ApolloServer` constructor and the `gql` function.

/api/graphql/server.js

```javascript
import { ApolloServer, gql } from "apollo-server";

// We'll set up our server here.
```
To create a server, we next create a new instance of `ApolloServer` with `new ApolloServer()`:

/api/graphql/server.js

```javascript
import { ApolloServer, gql } from "apollo-server";

const server = new ApolloServer({
  playground: true,
  typeDefs: gql`
    type Example {
      message: String
    }

    type Query {
      queryExample: Example
    }

    type Mutation {
      mutationExample: Example
    }
  `,
  resolvers: {
    Query: {
      queryExample: (parent, args, context) => {
        return {
          message: "This is the message from the query resolver.",
        };
      },
    },
    Mutation: {
      mutationExample: (parent, args, context) => {
        console.log("Perform mutation here before responding.");
        return {
          message: "This is the message from the mutation resolver.",
        };
      },
    },
  },
});
```
We've added a lot here, so let's step through it. First, we create a variable `server` and set it equal to the return value of calling `new ApolloServer()`. This is our Apollo Server instance. As an argument to that constructor, to configure our server, we pass an object with three properties: `playground`, `typeDefs`, and `resolvers`.
Here, `playground` is assigned a boolean `true` value that tells Apollo Server to enable the GraphQL Playground GUI at `/graphql` when the server is running. This is a handy tool for testing and debugging your GraphQL API without having to write a bunch of front-end code. Typically, it's good to limit usage of the playground to your development `NODE_ENV` only. To do that, you can set `playground` here to `process.env.NODE_ENV === 'development'`.
Next, the `typeDefs` and `resolvers` properties here, together, describe the schema for your GraphQL server. The former, `typeDefs`, is the part of your schema where you define the possible types, queries, and mutations that the server can handle. In GraphQL, there are two root types, `Query` and `Mutation`, which can be defined alongside your custom types (which describe the shape of the data returned by your queries and mutations), like `type Pizza {}`.
Above, we've spec'd out a full example schema. First, notice that we've assigned our `typeDefs` value using the tagged form gql`` (backticks directly after the function name), where `gql()` is a function that expects a single string argument. The syntax here (without parentheses following `gql`) is a built-in JavaScript feature called a tagged template literal, which allows you to invoke a function and pass it a string value at the same time. To be clear, the above is equivalent to gql(``). Using this syntax requires that the string value passed be written as a template literal (meaning, a string defined using backticks as opposed to single or double quotes).
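If tagged template literals are new to you, here's a tiny standalone example (unrelated to Apollo, names are ours) showing the mechanics: the tag function receives the literal's static string parts plus any interpolated values.

```javascript
// A toy tag function: it receives the literal's static string parts and
// the interpolated values, and can combine them however it likes.
const shout = (strings, ...values) =>
  strings.reduce(
    (out, part, i) =>
      out + part + (i < values.length ? String(values[i]).toUpperCase() : ""),
    ""
  );

const name = "graphql";
console.log(shout`hello ${name}!`); // → hello GRAPHQL!
console.log(shout`no interpolation here`); // → no interpolation here
```

`gql` works the same way: it receives the schema string and parses it, rather than uppercasing it.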
The `gql` function itself is responsible for taking a string containing code written in the GraphQL DSL (domain-specific language). DSL, here, refers to the unique syntax of the GraphQL language. When it comes to defining our schema, we have the option of writing it in the GraphQL DSL. The `gql` function takes in that string and converts it from the DSL into an abstract syntax tree (AST), which is an object that describes the schema in a format GraphQL can understand.
Inside the string we pass to `gql()`, first, we've included a data type, `type Example`, which defines a custom type (not the built-in `Query` or `Mutation` types) describing an object containing a `message` field whose value should be a `String`. Next, we define the root `Query` type and `Mutation` type. On the root `Query` type, we define a field `queryExample` (which we expect to pair with a resolver function next), which we expect to return data in the shape of the `type Example` we just defined. Next, we do the same for our root `Mutation` type by adding `mutationExample`, also expecting a return value in the shape of `type Example`.
In order for this to work, we need to implement resolver functions in the `resolvers` object (passed to our `ApolloServer` constructor). Notice that here, inside of `resolvers`, we've defined a `Query` property and a `Mutation` property. These intentionally mimic the structure of `type Query` and `type Mutation` above. The idea here is that the function `resolvers.Query.queryExample` will be called whenever a query is run on the `queryExample` field from a client (browser or native app), fulfilling, or resolving, the query.

The same exact thing is taking place at `resolvers.Mutation.mutationExample`; however, here, we're defining a mutation (meaning we expect this code to change some data in our data source, not just return some data from our data source). Notice that the shape of the object returned from both the `queryExample` resolver and the `mutationExample` resolver matches the shape of the `type Example` we defined earlier. This is done because, in our root `Query` and root `Mutation`, we've specified that the value returned from those resolvers will be in the shape of `type Example`.
/api/graphql/server.js

```javascript
import { ApolloServer, gql } from "apollo-server";

const server = new ApolloServer({
  playground: true,
  typeDefs: gql`...`,
  resolvers: { ... },
});

server.listen({ port: 3000 }).then(({ url }) => {
  console.log(`Server running at ${url}`);
});

export default () => {};
```
Finally, with our `typeDefs` and `resolvers` defined, we put our server to use. To do it, we take the `server` variable we stored our Apollo Server instance in earlier and call its `listen()` method, which returns a JavaScript Promise (hence the `.then()` syntax chained on the end). To `listen()`, we pass an options object with a single property, `port`, equal to `3000`. This instructs Apollo Server to listen for inbound connections at `localhost:3000`.
With this, we should have a functioning Apollo Server up and running. Of note: because we're overwriting the included `/api/graphql/server.js` file in the Node.js boilerplate we started from, we've added an `export default () => {}`, exporting an empty function to fulfill the expectations of the existing Express.js server (we'll learn how to connect the Apollo Server with this Express server later in the tutorial).
To give this a test, from the root of the boilerplate, run `npm run dev` to start up the server. Fair warning: because we're starting two separate servers with this command (the Apollo Server we just implemented above and the existing Express server included in the boilerplate), you will see two statements logged telling you the server is running on different ports:
Terminal
Server running at Server running at
Before we move on to combining this new Apollo Server with the existing Express server in the boilerplate, let's look at how to set a custom context for resolvers.
Setting the resolver context
While we technically have a functioning GraphQL server right now (you can verify this by visiting it in your browser), it's good to be aware of how to set a custom resolver context, as this plays into user authentication when using GraphQL as your main data layer.

/api/graphql/server.js

```javascript
import { ApolloServer, gql } from "apollo-server";

const server = new ApolloServer({
  playground: true,
  context: async ({ req, res }) => {
    const token = req?.cookies["jwt_token"];

    const context = {
      req,
      res,
      user: {},
    };

    const user = token ? await authenticationMethod({ token }) : null;

    if (!user?.error) {
      context.user = user;
    }

    return context;
  },
  typeDefs: gql`...`,
  resolvers: { ... },
});

server.listen({ port: 3000 }).then(({ url }) => {
  console.log(`Server running at ${url}`);
});

export default () => {};
```
In GraphQL, whether you're performing a query or a mutation, your resolver functions are passed a `context` object as their final argument. This object contains the current "context" for the request being made to the GraphQL server. For example, if a user is logged in to your app and performs a GraphQL request, we may want to include the user's account information in the context to help us resolve the query or mutation (e.g., verifying that the logged-in user has the proper permissions to access that query or mutation).
Here, alongside the `playground`, `typeDefs`, and `resolvers` properties we added earlier, we've added `context`, set to a function. This function is automatically called by Apollo Server whenever a request comes into the server. It's passed an options object as an argument containing the server request (`req`) and response (`res`) objects (what Apollo Server uses internally to respond to the HTTP request made to the GraphQL server).
From that function, we want to return an object representing the `context` argument that we want available in all of our resolvers. Above, we've come up with a hypothetical example where we anticipate an HTTP cookie being passed to the server (along with the GraphQL request) and use it to authenticate a user. Note: this is pseudocode and will not return a user in its current state.
To assign the user to the context object, we first define a base `context` object, which contains the `req` and `res` from the options object passed to the context function by Apollo Server, combined with an empty object representing our user. Next, we attempt to authenticate our user using the assumed `jwt_token` cookie. Again, hypothetically, if this function existed, we would expect it to return a user object (e.g., containing an email address, username, and other user-identifying data).
Finally, from the `context: () => {}` function, we return the `context` object we defined (with the `req`, `res`, and `user` values).
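As an aside, the `req?.cookies` lookup above assumes cookie-parsing middleware has already turned the raw `Cookie` header into an object. If you're curious what that transformation looks like, here's a minimal, hypothetical sketch (not the actual middleware the boilerplate uses):

```javascript
// Hypothetical helper (not part of Apollo Server): parse a raw Cookie
// header like "jwt_token=abc; theme=dark" into a plain object, similar
// to what cookie-parsing middleware would hand us on req.cookies.
const parseCookies = (header = "") =>
  header
    .split(";")
    .map((pair) => pair.trim())
    .filter(Boolean)
    .reduce((acc, pair) => {
      const index = pair.indexOf("=");
      if (index === -1) return acc;
      const name = decodeURIComponent(pair.slice(0, index).trim());
      const value = decodeURIComponent(pair.slice(index + 1).trim());
      return { ...acc, [name]: value };
    }, {});

console.log(parseCookies("jwt_token=abc123; theme=dark"));
// → { jwt_token: 'abc123', theme: 'dark' }
```

In a real app you'd reach for battle-tested middleware rather than rolling your own, but the shape of the result is the same.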
/api/graphql/server.js

```javascript
import * as apolloServer from "apollo-server";

const { ApolloServer, gql } = apolloServer.default;

const server = new ApolloServer({
  playground: true,
  context: async ({ req, res }) => {
    [...]
    return context;
  },
  typeDefs: gql`...`,
  resolvers: {
    Query: {
      queryExample: (parent, args, context) => {
        console.log(context.user);
        return {
          message: "This is the message from the query resolver.",
        };
      },
    },
    Mutation: {
      mutationExample: (parent, args, context) => {
        console.log(context.user);
        console.log("Perform mutation here before responding.");
        return {
          message: "This is the message from the mutation resolver.",
        };
      },
    },
  },
});

server.listen({ port: 3000 }).then(({ url }) => {
  console.log(`Server running at ${url}`);
});
```
Showcasing how to put the context to use: here, inside our `queryExample` and `mutationExample` resolvers, we've logged out the `context.user` value we set above.
Attaching the GraphQL server to an existing Express server
Up until this point, we've been setting up our Apollo Server as a standalone GraphQL server (meaning we're not attaching it to an existing server). Though this works, it limits our server to only having a `/graphql` endpoint. To get around this, we have the option of "attaching" our Apollo Server to an existing HTTP server.
What we're going to do now is paste back in the original source of the `/api/graphql/server.js` file that we overwrote above with our standalone GraphQL server:

```javascript
export default (app) => {
  const server = new ApolloServer({
    ...schema,
    introspection: isDevelopment,
    playground: isDevelopment,
    context: async ({ req, res }) => {
      const token = req?.cookies["app_login_token"];

      const context = {
        req,
        res,
        user: {},
      };

      const user = token ? await loginWithToken({ token }) : null;

      if (!user?.error) {
        context.user = user;
      }

      return context;
    },
  });

  server.applyMiddleware({
    cors: corsConfiguration,
    app,
    path: "/api/graphql",
  });
};
```
Some of this should look familiar. First, notice that instead of calling `new ApolloServer()` directly within the body of our `/api/graphql/server.js` file, we've wrapped that call in a function expecting `app` as an argument. Here, `app` represents the existing Express.js server set up at `/index.js` in the Node.js boilerplate we've been using throughout this tutorial.
Inside the function (notice that we're exporting this function as the default export for the file), we set up our Apollo Server just like we did above. Here, though, notice that `typeDefs` and `resolvers` are missing as properties. These are contained within the `schema` value imported from the `./schema.js` file in the same directory, at `/api/graphql/schema.js`.
The contents of this file are nearly identical to what we saw above. It's separated in the boilerplate for organizational purposes; this does not serve any technical purpose. To utilize that file, we use the JavaScript spread operator `...` to say "unpack the contents of the object contained in the imported `schema` value onto the object we're passing to `new ApolloServer()`." As part of this unpacking, the `typeDefs` and `resolvers` properties on that imported object will be assigned back to the options we're passing to `new ApolloServer()`.
Just below this, we can also see a new property being added: `introspection`. This, along with the existing `playground` property we saw earlier, is set to the value of `isDevelopment`, a value that's imported via the `.app/environment.js` file from the root of the project and tells us whether or not our `process.env.NODE_ENV` value is equal to `development` (meaning we're running this code in our development environment).
The `introspection` property tells Apollo Server whether or not to allow GraphQL clients to "introspect," or discover, the types, queries, mutations, etc. that the GraphQL server offers. While this is helpful for debugging and for public APIs built with GraphQL, it's a security risk for private APIs built with GraphQL.
```javascript
export default (app) => {
  const server = new ApolloServer({
    [...]
  });

  server.applyMiddleware({
    cors: corsConfiguration,
    app,
    path: "/api/graphql",
  });
};
```
With all of that set, finally, the part that plugs our Apollo Server into our existing Express.js server is the `server.applyMiddleware()` method at the bottom of our exported function. This takes in three properties:

- `cors`, which describes the CORS configuration and permissions for which domains are allowed to access the GraphQL server.
- `app`, which represents our existing Express.js server.
- `path`, which describes at what URL in our existing Express.js server the GraphQL server will be accessible.
For the `cors` property, we utilize the CORS middleware that's included with the Node.js boilerplate we're using (we'll look at this in detail in the next section). For the `path`, we specify that our GraphQL server will be attached to our running server (started on port `5001` by running `npm run dev` from the root of the project) at the path `/api/graphql`. In other words, instead of the path we saw earlier, we're now "piggybacking" on the existing Express.js server and making our GraphQL server accessible on that server's port (5001). The end result is effectively the same: we get a running GraphQL server via Apollo Server, but we do not spin up another HTTP server on a new port.
Handling CORS issues when connecting via external clients
Finally, one last detail we need to cover is CORS configuration. Like we saw in the previous section, we're relying on the
cors middleware included in the Node.js boilerplate we've used throughout this tutorial. Let's open up that file in the boilerplate and explain how it impacts our GraphQL server:
/middleware/cors.js
import cors from "cors"; import settings from "../lib/settings"; const urlsAllowedToAccess = Object.entries(settings.urls || {}).map(([key, value]) => value) || []; export const configuration = { credentials: true, origin: function (origin, callback) { if (!origin || urlsAllowedToAccess.includes(origin)) { callback(null, true); } else { callback(new Error(`${origin} not permitted by CORS policy.`)); } }, }; export default (req, res, next) => { return cors(configuration)(req, res, next); };
This looks more threatening than it is. To cut to the chase, the end goal here is to tell the browser's CORS check (CORS stands for cross-origin resource sharing, and defines which URLs can access a server) whether or not the URL a request is being made from (e.g., an app running at another origin) can access our GraphQL server.
settings-development.json

```json
{
  [...]
  "urls": {
    "api": "...",
    "app": "..."
  }
}
```
That request's access is controlled via the `urls` list included in the `settings-<env>.json` file at the root of the project. That setting contains the URLs that are allowed to access the server. In this example, we want the same URLs that are allowed to access our existing Express.js server to also be able to access our GraphQL server.
Here, the "api" URL is the server itself (meaning it can make requests back to itself, if necessary) and the "app" URL is our front-end, customer-facing app (we use `localhost:5000` because that's the default port CheatCode's Next.js Boilerplate runs on).
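The origin check in the middleware above boils down to a simple predicate. Here's a standalone sketch (the function name and allow-list values are ours, not the boilerplate's): requests with no Origin header (e.g., curl or same-origin requests) are allowed, as is any Origin found in the allow-list.

```javascript
// Standalone sketch of the CORS origin predicate: allow requests that
// carry no Origin header, or whose Origin appears in the allow-list.
const urlsAllowedToAccess = ["http://localhost:5001", "http://localhost:5000"];

const isOriginAllowed = (origin) =>
  !origin || urlsAllowedToAccess.includes(origin);

console.log(isOriginAllowed(undefined)); // → true
console.log(isOriginAllowed("http://localhost:5000")); // → true
console.log(isOriginAllowed("https://evil.example")); // → false
```

The `cors` package wraps this same decision in its callback-based `origin` option, either permitting the request or handing back an error.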
Wrapping up
In this tutorial, we learned how to set up a GraphQL server using the `apollo-server` package, via two methods: defining a standalone GraphQL server, and attaching a GraphQL server to an existing HTTP server (in this case, an Express.js server).
We also learned how to set up a basic GraphQL schema and attach that to our server as well as how to define a custom context for our resolvers to handle things like authentication from within our GraphQL server.
Finally, we took a look at CORS configuration and made some sense of how to control access to our GraphQL server when attaching it to an existing server.
Yusaku Hashimoto? For example: I have some universal state in IO. We'll call it an IORef, but it could be anything, like reading lines from a file. And I have some method for accessing and updating that state.

> next r = do n <- readIORef r
>             writeIORef r (n+1)
>             return n

Now, if I use unsafeInterleaveIO:

> main = do r <- newIORef 0
>           x <- do a <- unsafeInterleaveIO (next r)
>                   b <- unsafeInterleaveIO (next r)
>                   return (a,b)
>           ....

The arbitrariness is not "random" in the statistical sense, but rather is an oracle for determining the order in which evaluation has occurred. Consider, as an illustration, these two alternatives for the ....:

> fst x `seq` snd x `seq` return x

vs

> snd x `seq` fst x `seq` return x

In this example, main will return (0,1) or (1,0) depending on which was chosen. You are right in that the issue lies in seq, but that's a red herring. Having made x, we can pass it along to any function, ignore the output of that function, and inspect x in order to know the order of strictness in that function.

This example is somewhat artificial because we set up x to use unsafeInterleaveIO in the bad way. For the intended use cases where it is indeed (arguably) safe, we would need to be sure to manually thread the state through the pure value (e.g. x) such that the final value is sane. For instance, in lazy I/O where we're constructing a list of lines/bytes/whatever, we need to ensure that any access to the Nth element of the list will first force the (N-1)th element, so that we ensure that the list comes out in the same order as if we forced all of them at construction time.

For things like arbitrary symbol generation, unsafeInterleaveIO is perfectly fine because the order and identity of the symbols generated are irrelevant, but more importantly it is safe because the "IO" that's going on is not actually I/O. For arbitrary symbol generation, we could use unsafeInterleaveST instead, and that would be better because it accurately describes the effects.

For any IO value which has real I/O effects, unsafeInterleaveIO is almost never correct, because the ordering of effects on the real world (or whether the effects occur at all) depends entirely on the evaluation behavior of the program, which can vary by compiler, by compiler version, or even between different runs of the same compiled binary.

-- 
Live well,
~wren
Is there any alternative to stored procedures that is as secure and fast? I only know of Hibernate. Are there any other technologies like it?
Stored procedures are a place to put code (SQL) that executes on the database, so I understand the question to mean:

"Is there any other way to package up code that runs on the database?"

There are several answers:

- There is nothing else quite like a stored procedure, but there are alternatives you can consider.
- You could write all of your SQL as strings within your client code (Java or whatever). This has various problems, however (loss of encapsulation, tight coupling -> harder maintenance), and is not recommended.
- You could use an ORM such as NHibernate, which inserts a layer between the client logic and the database. The ORM generates the SQL to execute on the database. With an ORM, it is harder to express complex business logic than in a stored procedure (sweeping generalisation!).
- A sort of halfway house is to define your own data access layer (DAL) in Java (or whatever you are using) and keep it separate from the main body of client code (separate classes / namespaces / etc.), so that the client makes calls to the DAL, and the DAL translates these and sends SQL to the database, returning the results from the database to the client.
Yes, you can use dynamic SQL, but I personally like stored procedures better:

1) If you are using MS SQL Server, it will create a query plan that should let the stored procedure execute faster than plain dynamic SQL.

2) It can be easier and more efficient to fix a bug in a stored procedure, especially if your application calls that procedure in several places.

3) I find it nice to encapsulate database logic in the database rather than in embedded SQL or an application config file.

4) Creating a stored procedure in the database lets SQL Server perform some syntax and validation checks at design time.
Hibernate is an object/relational persistence service.

A stored procedure is a subroutine in a relational database system.

These are different things.

If you want an alternative to Hibernate, check out iBatis for Spring.
You can make dynamic SQL as secure and fast as stored procedures; it just takes some work. Of course, it also takes some work to make stored procedures secure and fast.
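To illustrate the "secure" part of that claim: the usual way to make dynamic SQL safe is parameterized queries, so user input is never spliced into the statement text. Here's a minimal, hypothetical sketch using Python's built-in sqlite3 module (the table and data are made up purely for illustration):

```python
import sqlite3

# Hypothetical table and data, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

# Unsafe: splicing user input into the SQL string invites injection, e.g.:
#   conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'")

# Safe: the driver sends the value separately from the statement text,
# so an injection payload is treated as a literal string, not as SQL.
malicious = "alice' OR '1'='1"
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)
).fetchall()
print(rows)  # → [] — the payload matches no user name
```

The same placeholder idea exists in every mainstream driver (JDBC's PreparedStatement, ADO.NET parameters, etc.), and prepared statements also let the database reuse query plans, addressing the "fast" part.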
A stored procedure is a subroutine available to applications accessing a relational database system. Stored procedures (sometimes called a proc, sproc, StoPro, or SP) are actually stored in the database data dictionary.

Typical uses for stored procedures include data validation (integrated into the database) and access-control mechanisms. In addition, stored procedures are used to consolidate and centralize logic that was originally implemented in applications. Large or complex processing that might require the execution of several SQL statements can be moved into stored procedures, and all applications then call only the procedures.

Stored procedures are similar to user-defined functions (UDFs). The major difference is that UDFs can be used like any other expression within SQL statements, whereas stored procedures must be invoked using the CALL statement.
I'd suggest reading the article above and reframing your question: Hibernate is not an alternative to stored procs.
@Jeremy: I have no idea what idiot voted you down. +1 from me.

@Poster: Parameterized SQL (it may be dynamic, but not always) will execute with the same safety and efficiency as stored procs.
Talk:XDG Base Directory
Emacs should be XDG-aware in the first 27.* release (see this commit). Not sure if this warrants updating the table yet. Odnir (talk) 22:00, 28 August 2019 (UTC)
- Would it be advised to use symbolic links to manually split such offending programs into their corresponding places? There are a lot of offending packages that store their cache inside $XDG_CONFIG_HOME instead of in $XDG_CACHE_HOME. I noticed a pattern, though. Most of the offenders use Electron, or are Chromium-based browsers... could they be nudged to automatically migrate the data to the corresponding places? Rolandog (talk) 12:58, 10 February 2022 (UTC)
- To answer your first question: for any use cases I can think of, I wouldn't advise it. For the case of having all of your ~/.config in VCS, replacing the offending configurations with symlinks provides no clear advantage over adding a line to a .gitignore. In fact, this increases the overhead needed to set up a new system, and some might argue that the complexity added to your hierarchy by having the same files in two locations is also undesirable.
- As for the Electron programs, this is an even more egregious abuse of $XDG_CACHE_HOME than the one Alad was referring to. This section was, as I understand it, mainly targeting Qt programs that put human-unreadable, non-configuration state in configuration files. Nevertheless, the same principles apply. Add the offending programs to a .gitignore. Though most Electron programs indeed share a common structure, I feel like .gitignore is too primitive to do anything very useful with this information. I think you would still be best off blocking entire Electron program config directories manually. Cheers, CodingKoopa (talk) 04:42, 2 March 2022 (UTC)
- Should it not be XDG_STATE_HOME ($HOME/.local/state) now? Baerbeisser (talk) 07:29, 6 August 2021 (UTC)
Add description of support categories
Hi, do you think we could add a description of what each category (Supported, Partial, Hardcoded) means? For example, I don't know where to put things like the following:

- apps that use a hardcoded ~/.config, not using the XDG variables
- apps that use XDG_CONFIG_HOME for everything
- apps where we can use alternative methods (environment variable, option, ...) for specific files
- apps where we can use alternative methods, freeing $HOME, but not using the correct path for each file (config vs cache vs data) (for example, GnuPG)
Maybe we could make those categories evolve, either by refactoring them (creating new ones) and/or putting the categories directly in the table (while still allowing programs to be viewed per category)?
Apollo22 (talk) 13:58, 14 March 2019 (UTC)
- #Partial meaning is implied in the XDG Base Directory#Support section:
-.
- If you can't change directories without modifying the code, it goes to #Hardcoded. Maybe it should not be #Partial, as such software actually does not support the spec, but rather #Workaround available.
- I also propose using fields to describe how different programs work with the spec, instead of just having 3 categories:
- It's hard to say if everything in those new fields would be useful. I don't think it should matter if the application prefers its own environment variables or legacy config paths, or if it migrates old configs to xdg dirs. I think it's good enough as long as you can run the latest version for the first time and it follows the spec.
- I definitely support getting rid of the misleading word "Partial". The fact that there are workarounds using per-application environment variables does not change the fact that most of that software does not support the spec at all.
- -- nl6720 (talk) 12:09, 23 April 2020 (UTC)
- I question the value-add of all the extra fields, especially as compliance in programs tends to be more nuanced than this allows. For example, many programs make use of XDG_CONFIG_HOME but debatably fail to correctly follow the standard, mixing state with config, etc. The reality is that many developers have different interpretations of the standard, which would require far more binary options to capture. I propose a simpler table that doesn't try to over-define abstract values for implementations, and just captures the implementation detail.
- Comic-paralyze-image (talk) 15:23, 11 November 2021 (UTC)
- More columns are better for people…
- maintaining this page
- looking to move projects towards supporting XDG more
- selecting software based on XDG-support
- Fewer columns are better for people…
- trying to get their particular piece of software to work better with XDG
- That's my current impression.
- I've spent more time as part of the more-columns-group, than I have as part of the fewer-columns-group.
- I suspect that much more time is spent by the latter group.
- The table should imo still stay readable to people who are seeing it for the 1st time (Cpt. Obvious).
- But it would be nice to be able to express and sort by more granularity.
- I disagree that more columns automatically make it overwhelming; Wikipedia has a lot of these tables. They've probably put a lot of thought into them, and they're still using them. Cyethiod (talk) 13:55, 12 November 2021 (UTC)
- Would this serve as a sufficient counter-example? 23 columns (36 rows) - Comparison of virtual reality headsets #Tethered. Cyethiod (talk) 16:12, 12 November 2021 (UTC)
- That's not nearly as large, since there are over 400 rows in 4 tables on this page. Also none of those 23 columns contains arbitrary-width content like the current "Notes" column here. None of the proposals above removed that column, and one even added another column where arbitrary notes might be added. — Lahwaacz (talk) 16:37, 12 November 2021 (UTC)
- For every user on Wikipedia that wants larger, more verbose tables, there are a dozen users that will cite rules about Manual of Style violations, original research, etc. I would not hold Wikipedia articles as a shining example of _the correct way to do something_. If you really want a reference from Wikipedia, I would find a MOS page instead.
- In general I am a more-columns person, but having made some effort myself to implement overrides I really think we would need more than six columns to capture **everything** meaningful.
- If we wanted more columns, I would suggest following the standard more closely:
- The biggest problem with an extended table like this, is the additional onus on writers to confirm AND AGREE that an implementation is correct. Agreeing on the standard doesn't appear to be easy.
- --Comic-paralyze-image (talk) 18:53, 12 November 2021 (UTC)
Should Organizations/DEs be mentioned?
It is kind of obvious that the major DEs follow the spec, since Freedesktop.org is mainly run by those groups, but that's not obvious from this article. I think it's important to mention how KDE, GNOME, Qt, GTK, Debian, Ubuntu, Red Hat, etc. are following the spec. Since they're not single applications they don't really fit in any of the main sections, but I think they definitely deserve the mention.
Hobyamnlyzfsr (talk) 21:35, 9 September 2019 (UTC)
- It's not obvious; even freedesktop.org does not fully follow its own standards: e.g. Cursor themes#XDG specification does not use the XDG base directory standard... -- Lahwaacz (talk) 07:54, 23 April 2020 (UTC)
MPV removing XDG support
It looks like MPV will need to be removed from this as the author decided it's "stupid" - see
Not sure if this will make it into a release branch though, as a few days ago he also made it crash on startup under GNOME because GNOME was also "broken" or something, and then reverted it (nor whether the Arch devs will revert this pointless commit for the Arch package).
—This unsigned comment is by Vash63 (talk) 11:45, 9 July 2020. Please sign your posts with ~~~~!
- For the record, this change never made it to a release version. It was reverted 2020-10-15.[5] Comic-paralyze-image (talk) 22:09, 29 December 2020 (UTC)
Android
I updated the android paths, but these might also be valid alternatives:
$ export ANDROID_EMULATOR_HOME="$ANDROID_PREFS_ROOT"/emulator
$ export ANDROID_AVD_HOME="$XDG_DATA_HOME"/android/avd
In the end I decided it would make more sense to have emulators and their configuration together in XDG_DATA_HOME, as the configuration is probably useless on its own.
—This unsigned comment is by Xerus (talk) 10:55, 23 December 2020 (UTC). Please sign your posts with ~~~~!
Tried the android configuration suggested, as well as `ADB_VENDOR_KEYS` (as suggested by `adb --help`). None of these had any effect.
Gesh (talk) 19:46, 3 February 2021 (UTC)
Vim/Neovim
Hi,
somehow the suggested way to set up the
VIMINIT environment variable in case you want to use separate configs for Vim and Neovim
export VIMINIT='if !has('nvim') | source "$XDG_CONFIG_HOME/vim/vimrc" | endif'
does not work for me. I had to change it to:
export VIMINIT='if !has("nvim") | let $MYVIMRC="$XDG_CONFIG_HOME/vim/vimrc" | else | let $MYVIMRC="$XDG_CONFIG_HOME/nvim/init.vim" | endif | source $MYVIMRC'
Besides some minor fixes that were necessary to make the first line work in Vim at all (e.g. double quotes around nvim), with the first line Neovim didn't load my ~/.config/nvim/init.vim. The second line fixes this issue.
I'm using Vim version 8.2.1989 and Neovim version 0.4.4.
I didn't want to change the article right away, because maybe I just did something else wrong. Does someone have the same issue with the suggested setup?
Schuam (talk) 07:41, 29 December 2020 (UTC)
Firefox
Mozilla Firefox puts the default profile in ~/.mozilla/. There is a workaround for "moving" it out of the home directory, though: Firefox has the command line option --profile (I can't actually find documentation on this online, and there's no Firefox man page, but it's in the firefox --help output), which lets you specify a different profile directory to use than the one in the home directory. If you write a shell script wrapper for firefox to use a profile from, say, $XDG_DATA_HOME/mozilla/firefox, then ~/.mozilla/ basically becomes useless.

Note that even when specifying a different profile directory, ~/.mozilla/ still gets created regardless. This directory can, however, be removed safely without affecting the new profile, so in the same wrapper script you could automatically remove the ~/.mozilla/ directory immediately after its creation.
Should this be included in Firefox's entry under Notes? I think it's at least worth mentioning for people that really want to move Firefox out of the home directory.
Inco (talk) 20:52, 20 April 2021 (UTC)
- It should be added to the Firefox page and then linked from the notes column. -- Lahwaacz (talk) 20:58, 20 April 2021 (UTC)
- The ~/.mozilla directory gets recreated during the runtime of Firefox. I wrote a wrapper which solves this issue: MozXDG -- Jorengarenar (talk) 09:05, 21 April 2021 (UTC)
Obscure software
Was wondering how much sense it would make to add obscure/rare software to the list, such as games. I was specifically thinking about MapTool which places files in the home directory but can be configured not to. Maze (talk) 03:04, 24 June 2021 (UTC)
- 80% of this page is already for obscure software, so I don't see how it would make a difference. -- MrX (talk) 11:47, 24 June 2021 (UTC)
Where to put workarounds?
Some are statically set variables, some are dynamically obtained (scripted), some are aliases; some can be set in the shell, some need to be set before graphical login. So where should they be set per user, and where globally? I think that should be mentioned in the wiki.
Per user I have them in ~/.bashrc and ~/.bash_aliases (respectively XDG_CONFIG_HOME/bash/..., which must be set before the shell is launched) and in ~/.xinitrc and ~/.xprofile and/or ~/.xsession (respectively XDG_CONFIG_HOME/session/...).
But the global settings? Should they go to /etc/X11/xinit/xinitrc.d/? And things like bash's HISTFILE to /etc/profile.d and .../Xsession edited to load /etc/profile?
Baerbeisser (talk) 07:23, 6 August 2021 (UTC) | https://wiki.archlinux.org/title/Talk:XDG_Base_Directory | CC-MAIN-2022-27 | refinedweb | 2,002 | 63.19 |
Algorithm to convert Binary Search Tree into Balanced Binary Search Tree
Reading time: 30 minutes | Coding time: 10 minutes
In this article, we will explore an algorithm to convert a Binary Search Tree (BST) into a Balanced Binary Search Tree. In a balanced BST, the height of the tree is log N, where N is the number of elements in the tree. In the worst case, the height of an unbalanced BST can be up to N, which makes it the same as a linked list. The height depends upon the order of insertion of elements, and a plain Binary Search Tree, unlike some other trees such as the AVL tree, has no routines to keep itself balanced. It is important to keep a BST balanced, as it will give the best performance for the tasks it is built for, such as:
- searching elements in O(log N)
The conversion to a Balanced Binary Search Tree takes O(N) time.
Example:
Input of an unbalanced Binary Search Tree: the right-skewed chain 1 -> 2 -> 3 -> 4 -> 5, as built by the driver program below.
Output of the same tree but as a balanced Binary Search Tree: 3 at the root, with preorder traversal 3 1 2 4 5.
As we know from the properties of a binary search tree, an inorder traversal of a binary search tree gives its elements in sorted order, which we store in an array. We can then form the balanced binary search tree from that sorted array.
Algorithm:
- Traverse given BST in inorder and store result in an array. This step takes O(n) time. Note that this array would be sorted as inorder traversal of BST always produces sorted sequence.
- Build a balanced BST from the sorted array created above: take the middle element of the array as the root, then recursively build the left subtree from the left half of the array and the right subtree from the right half. This step also takes O(n) time.
Implementation
Following is the implementation of the above algorithm.
#include <bits/stdc++.h>
using namespace std;

struct node
{
    int key;
    struct node *left, *right;
};

// A utility function to create a new BST node
struct node *newNode(int item)
{
    struct node *temp = (struct node *)malloc(sizeof(struct node));
    temp->key = item;
    temp->left = temp->right = NULL;
    return temp;
}

/* A utility function to insert a new node with given key in BST */
struct node* insert(struct node* node, int key)
{
    /* If the tree is empty, return a new node */
    if (node == NULL)
        return newNode(key);

    /* Otherwise, recur down the tree */
    if (key < node->key)
        node->left = insert(node->left, key);
    else if (key > node->key)
        node->right = insert(node->right, key);

    /* return the (unchanged) node pointer */
    return node;
}

/* This function traverses the skewed binary tree and stores its
   node pointers in vector nodes[] */
void storeBSTNodes(struct node* root, vector<struct node*> &nodes)
{
    // Base case
    if (root == NULL)
        return;

    // Store nodes in inorder (which is sorted order for a BST)
    storeBSTNodes(root->left, nodes);
    nodes.push_back(root);
    storeBSTNodes(root->right, nodes);
}

/* Recursive function to construct binary tree */
struct node* buildTreeUtil(vector<struct node*> &nodes, int start, int end)
{
    // base case
    if (start > end)
        return NULL;

    /* Get the middle element and make it root */
    int mid = (start + end) / 2;
    struct node *root = nodes[mid];

    /* Using index in inorder traversal, construct
       left and right subtrees */
    root->left = buildTreeUtil(nodes, start, mid - 1);
    root->right = buildTreeUtil(nodes, mid + 1, end);
    return root;
}

// This function converts an unbalanced BST to a balanced BST
struct node* buildTree(struct node* root)
{
    // Store nodes of given BST in sorted order
    vector<struct node*> nodes;
    storeBSTNodes(root, nodes);

    // Construct BST from nodes[]
    int n = nodes.size();
    return buildTreeUtil(nodes, 0, n - 1);
}

/* Function to do preorder traversal of tree */
void preOrder(struct node* node)
{
    if (node == NULL)
        return;
    cout << node->key << " ";
    preOrder(node->left);
    preOrder(node->right);
}

// Driver program to test above functions
int main()
{
    struct node *root = NULL;
    root = insert(root, 1);
    insert(root, 2);
    insert(root, 3);
    insert(root, 4);
    insert(root, 5);

    root = buildTree(root);

    cout << "Pre order Traversal of tree" << endl;
    preOrder(root);
    return 0;
}
Output:
Pre order Traversal of tree 3 1 2 4 5
Explanation:
First of all we will do an inorder traversal and store the elements in an array.
- First go to the left of the root; it is null, so go back to the root (1) and store it in the array.
- Then go to the right of the root, to 2, and check its left child; it is null, so store 2 in the array.
- Then go to the right of 2, to 3, and check its left child; it is null, so store 3 in the array.
- Then go to the right of 3, to 4, and check its left child; it is null, so store 4 in the array.
- Then go to the right of 4, to 5, and check its left child; it is null, so store 5 in the array. Now check the right child of 5; it is null, so return the array.
- Now we will build the balanced binary search tree from the sorted array we obtained through the above process.
- First of all, find the middle of the array, i.e. 3, and store it as the root of the new tree.
- Then build the left subtree from the left subarray [1, 2]: its middle (index 0) is 1, so store 1 as the left child of 3.
- The left subarray of 1 is empty, and the middle of its right subarray is 2, so store 2 as the right child of 1.
- Now start > end, therefore return to the root of the tree, i.e. 3.
- Having constructed the left subtree, we now construct the right subtree in a similar way: go to the right subarray [4, 5], find its middle, i.e. 4, and store it as the right child of 3.
- Then go to the right subarray of 4, find its middle, i.e. 5, and store it as the right child of 4.
- Now start > end, so return to the root of the tree, i.e. 3.
- Now our Balanced Binary Search Tree is ready.
Time Complexity:
The inorder traversal of the binary search tree takes O(n) time.
Forming the balanced binary tree from the sorted array also takes O(n) time.
Following is the recurrence relation for buildTreeUtil().
T(n) = 2T(n/2) + C

T(n) --> time taken for an array of size n
C    --> a constant (finding the middle of the array and linking the root to the left and right subtrees takes constant time)
Therefore, in total this algorithm takes O(N) time to complete. | https://iq.opengenus.org/algorithm-to-convert-binary-search-tree-into-balanced-binary-search-tree/ | CC-MAIN-2021-04 | refinedweb | 1,063 | 61.4 |
Context: your automated test needs to work with the files found inside a folder: perhaps count them, read or update their content, or delete or move them. How can you do these tasks easily?
You can use the FileUtils class from the Apache Commons library for such a task. The ‘listFiles’ method from this class will return a Collection of all the files found inside a specified baseFolder. Specifically, a Collection of java.io.File items is returned.
Once you have this, you can perform all actions on those files that you need in your tests, like counting how many they are, reading their content, updating their content, deleting them, moving them and so on.
The required import
In order to use this method in your tests, you will need to do an import as follows:
import org.apache.commons.io.FileUtils;
You also need to have the dependency to the Apache Commons IO library set up in your project. You can find its latest version in the Maven Central Repository.
Method signature
The signature of this method looks like this:
listFiles( final File directory, final String[] extensions, final boolean recursive)
- the first parameter, the ‘directory‘ will specify inside which folder on the computer to search for the desired files
- the second parameter, the 'extensions', can be either null or have a value. If its value is null, all files will be considered, no matter their extension. If you want to get files with only one or several extensions, you can specify them in this parameter, which is an array of Strings.
- the third parameter, the ‘recursive‘ boolean one, specifies whether you want to look recursively in all the folders found inside the ‘directory’, which is the first parameter.
The result of calling this method can be stored in a variable of type Collection.
Usage examples
- searching for all files inside a baseFolder, no matter the extension, and in all sub-folders of the baseFolder
Collection files = FileUtils.listFiles(new File(baseFolder), null, true);
- searching for all .png, .jpg and .bmp files inside a baseFolder, but only in the baseFolder and not in its sub-folders
Collection files = FileUtils.listFiles(new File(baseFolder), new String[]{"png", "jpg", "bmp"}, false);
Counting how many results were returned
If, in your test, you need to check how many files were found based on your search criteria, you can do that easily by calling the ‘size()’ method on the variable to which you stored the results of the ‘listFiles()’ method.
files.size()
Iterating over the result
Iterating over the Collection of files can be useful when you want to do some changes on one or more of them. For example, maybe you want to do some changes on all files whose name contains a specified value. Such a task can be accomplished as follows:
for (Iterator iterator = files.iterator(); iterator.hasNext(); ) {
    File file = (File) iterator.next();
    if (file.getName().contains("someDesiredValue")) {
        // here is the code for updating the file
    }
}
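For reference, the same kind of search can be done with only the JDK's java.nio.file API, without the Commons IO dependency. This is a hypothetical, self-contained sketch; the directory layout, file names and extension filter are made up for illustration:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ListByExtension {
    // Mirrors FileUtils.listFiles(directory, extensions, recursive):
    // a null extension list means "accept every file".
    static List<Path> listFiles(Path baseFolder, List<String> extensions,
                                boolean recursive) throws IOException {
        try (Stream<Path> walk =
                 Files.walk(baseFolder, recursive ? Integer.MAX_VALUE : 1)) {
            return walk.filter(Files::isRegularFile)
                       .filter(p -> extensions == null
                               || extensions.stream().anyMatch(ext ->
                                      p.getFileName().toString().endsWith("." + ext)))
                       .collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        // Build a small illustrative tree in a temporary directory.
        Path base = Files.createTempDirectory("demo");
        Files.createFile(base.resolve("a.png"));
        Files.createFile(base.resolve("b.txt"));
        Path sub = Files.createDirectories(base.resolve("sub"));
        Files.createFile(sub.resolve("c.jpg"));

        // Recursive search for png/jpg finds a.png and c.jpg.
        System.out.println(listFiles(base, List.of("png", "jpg"), true).size());
        // Non-recursive search with no filter finds a.png and b.txt.
        System.out.println(listFiles(base, null, false).size());
    }
}
```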
JavaScript Libraries are Not Your Front-End Architecture
This is not going to be a post about architecture but more about MHO (my humble opinion). Lately, I'm being asked to evaluate the architecture of small to big front-end solutions. There is a big buzz around front-end development, and it drives a lot of companies to build client-side solutions without considering how to build them. One of the misunderstandings I face a lot is calling the mere presence of JavaScript libraries in a solution its architecture. Sorry to say it, but
JavaScript libraries are not front-end architecture
Every JavaScript library deals with one or more aspects of your code, like namespaces/modules/packages in your server-side app. Whether it is jQuery for DOM interaction, Backbone/Angular/Ember/Knockout for MV* and separation of concerns, require.js for AMD or any other JavaScript library, all those libraries solve some common problems.
Take a moment and ask yourself these question regarding your JavaScript apps:
- Can your app scale?
- If you remove one of the modules/features in your webpage is it going to break?
- Do you have reusable components/features in your solution?
- Are your app parts tightly coupled?
- Are your modules/features testable?
- What is happening when a module/feature is in error state?
- And more
If you can answer those questions, that might imply that there is a front-end architecture in your solution. If you can't answer them, then you might have problems in your solution.
Building big and scalable JavaScript apps is a difficult task and shouldn't be undertaken lightly. Using JavaScript libraries helps to make the task less complicated, but it doesn't imply that you have an architecture. I would like to hear your opinion on the subject.
Mike again. Here is the scenario: you're sitting in front of a workstation that has been diagnosed with a Group Policy problem. You scurry to a command prompt, type the ever-familiar GPRESULT.EXE and redirect the output to a text file. Then you open the file in your favorite text editor and start scrolling through text to begin your adventure in troubleshooting Group Policy. But what if you could get an RSOP report like the one from the Group Policy Management Console (GPMC): HTML-based, with sorted headings and the works? Well, you can!
Let’s face it—the output for GPRESULT.EXE is not aesthetically pleasing to the eye. However, Windows Server 2008 and Windows Vista SP1 change this by including a new version of GPRESULT that allow you to have a nice pretty HTML output of Group Policy results, just like the one created when using GPMC reporting.
Your new GPRESULT command is GPRESULT /H rsop.html. Running this command creates an .html file in the current directory that contains Group Policy results for the currently logged on user and computer. You can also add the /F argument to force Group Policy Results to overwrite the file, should one exist from a previous run of GPRESULT. Also, if you or someone who signs your paycheck loves reporting and data mining, then GPRESULT has another option you'll enjoy: change the /H argument to /X (and the file extension to .xml) and GPRESULT will provide Group Policy Results in XML format. You can then take this output (conceivably from many workstations), store it in SQL and voila: reporting heaven.
Figure 1-HTML output from GPRESULT
Figure 2- XML output from GPRESULT
All you text-based report lovers can relax because the new version still defaults to text-based reporting.
I know, I know... what about Windows Server 2003 and Windows XP? No worries, we can accomplish the same task from the command line. We can use VBScript and the GPMC object model to provide a similar experience for those still using Windows Server 2003 or Windows XP. Both Windows Server 2003 and Windows XP are able to launch VBScripts. However, GPMC is a separate download for Windows Server 2003 and Windows XP. GPMC is a feature included in Windows Server 2008 that you can install through Server Manager.
Here is the code for the script. Copy and paste this code into a text file. Be sure to save the text file with a .vbs extension or it will not run correctly.
'=====================================================================
'
' VBScript Source File
'
' NAME:
'
' AUTHOR: Mike Stephens, Microsoft Corporation
' DATE:   11/15/2007
'
' COMMENT:
'
'=====================================================================

Set oGpm = CreateObject("GPMGMT.GPM")
Set oGpConst = oGpm.GetConstants()

Set oRSOP = oGpm.GetRSOP(oGpConst.RSOPModeLogging, "", 0)
strPath = Left(WScript.ScriptFullName, InStrRev(WScript.ScriptFullName, "\", -1, vbTextCompare))

oRSOP.LoggingFlags = 0

oRSOP.CreateQueryResults()
Set oResult = oRSOP.GenerateReportToFile(oGpConst.ReportHTML, strPath & "rsop.html")
oRSOP.ReleaseQueryResults()

WScript.Echo "Complete"

WScript.Quit()
Figure 3- VBScript code to save Group Policy results to an HTML file
The code shown in Figure 3 does not require any modification to work in your environment. Its only requirement is that the computer from which the script runs must have GPMC installed. Now, let's take a closer look at the script, which is a good introduction to GPMC scripting. (Please note that this posting is provided "AS IS" with no warranties, and confers no rights. Use of the included script sample is subject to the terms specified at.)
This line is responsible for making the GPMC object model available to the VBScript. If you are going to use the functions and features of GPMC through scripting, then you must include this line in your script. Also, if your script reports and error on this line, then it is a good indication that you do not have GPMC installed on the computer from which you are running the script.
The GPMC object model has an object that contains constants. Constants are nothing more than keywords that typical describe an option that you can use when calling one or more functions. You’ll see in Line 3 and Line 7 where we use the constant object to choose the RSOP mode and the format of the output file.
The RSOP WMI provider makes Group Policy results possible. Each client-side extension records its policy-specific information using RSOP as it applies policy. GPMC and GPRESULT then query RSOP and present the recorded data as the results of Group Policy processing. RSOP has two processing modes: Logging mode and Planning mode. Planning mode allows you to model "what if" scenarios with Group Policy and is commonly surfaced in the Group Policy Modeling node in GPMC. Logging mode reports the captured results from the last application of Group Policy processing. You can see the first parameter passed to GetRSOP is the constant RSOPModeLogging. This constant directs the GetRSOP method to retrieve logging data and not planning data, which is stored in a different section within RSOP. The remaining parameters are the default values for the GetRSOP method. This function returns an RSOP object, from which we can save RSOP data to a file.
This line simply gets the name of the folder from where the script is running and saves it into the variable strpath. This variable is used in line 7; when we save the report to the file system.
LoggingFlags is a property of the RSOP object. Typically, you use this property to exclude user or computer from the reporting results. Most of the time and for this example, you want to set LoggingFlags equal to zero (0). This is a perfect opportunity to use a constant (created in line 2). However, some of the values are not included in the constant object and LoggingFlags happens to be one of them. If you want to exclude computer results from the report data, then set LoggingFlags equal to 4096. If you want to exclude user results from the report data, then set LoggingFlags equal to 8192.
The CreateQueryResults method actually copies the RSOP data logged from the last processing of Group Policy into a temporary RSOP WMI namespace. This makes the data available for us to save as a report.
The script retrieved RSOP information in line six. In this line, we save the retrieved RSOP information into a file. The first parameter in the GenerateReprotToFile method is a value that represents the report format used by the method. This value is available from the constant object—ReportHTML. The second parameter is the path and filename of the file to which the method saves the data—rsop.html. Later, I’ll show you how you can change this line to save the report to XML. Remember, the script creates the RSOP.HTML file in the same folder from where you started the script.
The ReleaseQueryResults method clears the temporary RSOP namespace that was populated with the CreateQueryResults method. Group Policy stores actual RSOP data in a different WMI namespace; CreateQueryResults copies this data into a temporary namespace. This is done to prevent a user from reading RSOP data while Group Policy is refreshing the data. You should always call the ReleaseQueryResults method when you are done using the RSOP data. The remainder of the script is self-explanatory.
I mentioned earlier that you could also save the same data in XML as opposed to HTML. This is a simple modification to line seven.
Set oResult = oRSOP.GenerateReportToFile( oGpConst.ReportXML, strPath & “rsop.xml”)
Saving the report in XML is easy. Change the first argument to use the ReportXML constant and the file name (most importantly—the file extension) to reflect the proper file format type.
Group Policy Resultant Set of Policy (RSoP) data is critical information when you believe you are experiencing a Group Policy problem. Text formats provide most of the information you need, but at the expense of manually parsing through the data. HTML formats have the same portability as text formats and provide a better experience for navigating directly to the information you are looking for. Also, they look much better than text, so they are good for reports and presentations. Lastly, the XML format is awesome for finding things programmatically. You can also store this same information in a SQL database (for multiple clients) and run custom SQL queries to analyze Group Policy processing across multiple clients.
- Mike Stephens
You gave me no other choice Ned. I am sorry to have to use comments, but hopefully you get this. Drop me a line if you remember the youngin' Comprox: comprox [at) gmail dottt com. Sorry for the spam :)
For LoggingFlags the values didn't work for me. i.e.
If you want to exclude computer results from the report data, then set LoggingFlags equal to 4096. If you want to exclude user results from the report data, then set LoggingFlags equal to 8192.
The values should be
const long RSOP_NO_COMPUTER = 0x10000;
const long RSOP_NO_USER = 0x20000;
i.e
RSOP_NO_COMPUTER = 65536 <- tried this fine
RSOP_NO_USER = 131072 <-haven't tried this
I should have said I was trying this on XP - apologies if there is a difference. | http://blogs.technet.com/b/askds/archive/2007/12/04/an-old-new-way-to-get-group-policy-results.aspx | crawl-003 | refinedweb | 1,552 | 65.01 |
OPEN(2) BSD Programmer's Manual OPEN(2)
NAME
     open - open or create a file for reading or writing
SYNOPSIS
     #include <fcntl.h>

     int open(const char *path, int flags, mode_t mode);
DESCRIPTION
     The file name specified by path is opened for reading and/or writing as
     specified by the argument flags and the file descriptor returned to the
     calling process. The flags argument may indicate the file is to be
     created if it does not exist (by specifying the O_CREAT flag), in which
     case the file is created with permission bits mode, modified by the
     process's umask value (see umask(2)).

     The flags specified are formed by or'ing the following values:

           O_RDONLY        open for reading only
           O_WRONLY        open for writing only
           O_RDWR          open for reading and writing
           O_NONBLOCK      do not block on open
           O_APPEND        append on each write
           O_CREAT         create file if it does not exist
           O_TRUNC         truncate size to 0
           O_EXCL          error if create and file exists
           O_SYNC          Perform synchronous I/O operations.
           O_SHLOCK        Atomically obtain a shared lock.
           O_EXLOCK        Atomically obtain an exclusive lock.
           O_NOFOLLOW      If last path element is a symlink, don't follow it.

     Opening a file with O_APPEND set causes each write on the file to be
     appended to the end. If O_TRUNC and a writing mode are specified and the
     file exists, the file is truncated to zero length. If O_EXCL is set with
     O_CREAT and the file already exists, open() returns an error. This may
     be used to implement a simple exclusive access locking mechanism. If
     either of O_EXCL or O_NOFOLLOW are set and the last component of the
     pathname is a symbolic link, open() will fail even if the symbolic link
     points to a non-existent name.

     If the O_NONBLOCK flag is specified, do not wait for the device or file
     to be ready or available; this flag also has the effect of making all
     subsequent I/O on the open file non-blocking.

     If the O_SYNC flag is set, all I/O operations on the file will be done
     synchronously.

     A FIFO should either be opened with O_RDONLY or with O_WRONLY. The
     behavior for opening a FIFO with O_RDWR is undefined.

     The system imposes a limit on the number of file descriptors open
     simultaneously by one process; getdtablesize(3) returns the current
     system limit.
RETURN VALUES
     If successful, open() returns a non-negative integer, termed a file
     descriptor. Otherwise, a value of -1 is returned and errno is set to
     indicate the error.
ERRORS
     The named file is opened unless:

     [ENOTDIR]        A component of the path prefix is not a directory.
     [ENAMETOOLONG]   A component of a pathname exceeded {NAME_MAX}
                      characters, or an entire path name exceeded {PATH_MAX}
                      characters.
     [ELOOP]          Too many symbolic links were encountered in translating
                      the pathname, or the O_NOFOLLOW flag was specified and
                      the target is a symbolic link.
     [EISDIR]         The named file is a directory, and the arguments
                      specify it is to be opened for writing.
     [EINVAL]         The flags specified for opening the file are not valid.
     [EROFS]          The named file resides on a read-only file system, and
                      the file is to be modified.
     [EMFILE]         The process has already reached its limit for open
                      file descriptors.
     [ENFILE]         The system file table is full.
     [ENXIO]          The named file is a character special or block special
                      file, and the device associated with this special file
                      does not exist.
     [ENXIO]          The named file is a FIFO, the O_NONBLOCK and O_WRONLY
                      flags are set, and no process has the file open for
                      reading.
     [EINTR]          The open() operation was interrupted by a signal.
     [EOPNOTSUPP]     O_SHLOCK or O_EXLOCK is specified but the underlying
                      filesystem does not support locking.
     [EWOULDBLOCK]    O_NONBLOCK and one of O_SHLOCK or O_EXLOCK is
                      specified and the file is already locked.
SEE ALSO
     chmod(2), close(2), dup(2), flock(2), lseek(2), read(2), umask(2),
     write(2), getdtablesize(3)
STANDARDS
     The open() function conforms to IEEE Std 1003.1-1990 ("POSIX") and
     X/Open Portability Guide Issue 4.2 ("XPG4.2"). POSIX specifies three
     different flavors for synchronous I/O: O_SYNC, O_DSYNC, and O_RSYNC. In
     OpenBSD, these are all equivalent. The O_SHLOCK, O_EXLOCK, and
     O_NOFOLLOW flags are non-standard extensions and should not be used if
     portability is of concern.
HISTORY
     An open() function call appeared in Version 2 AT&T UNIX.
CAVEATS
     The O_TRUNC flag requires that one of O_RDWR or O_WRONLY also be
     specified, else EINVAL is returned.

MirOS BSD #10-current                                      November 16, 1993
information technology is spearheading this change. The age
VoIP Information
VoIP Information
VOIP Information
When you are looking for the most comprehensive VOIP Information available, you will quickly find that you have... the vast amount of information on your own could be imposing. Your other option
Tips for Increasing Money Making Abilities of Your Articles
Writing articles for promoting... your online business's profitability. Firstly, writing articles puts you... the internet businesses. Hence, writing articles must be considered by them for improving
public static void main
information, visit the following link:
Understanding public static void main...public static void main what is mean public static void main?
public-It indicates that the main() method can be called by any object
Quick Sort In Java
to sort integer values of an array using quick
sort.
Quick sort algorithm is developed by C. A. R. Hoare. Quick sort is a comparison sort.
The working...
Quick Sort in Java
Miracles Happen With SEO Articles - SEO Article by Expert SEO Company India
with writing SEO
articles. Ofcourse it is important to incorporate keywords....
Three prime objectives associated with writing successful
SEO articles... in your mind while outlining your articles.
Some people prefer using a chief keyword
Please help need quick!!! Thanks
Please help need quick!!! Thanks hey i keep getting stupid compile... simulation program of sorts here is my code:
RentforthBoatRace.java
public abstract class RentforthBoatRace {
public abstract double getSum
Some of Facts About Articles
Some of Facts About Articles
People Are Making A Lot Of Wealth With
Articles
... Popularity With Articles
You Can Also Achieve All
Struts Quick Start
Struts Quick Start
Struts Quick Start to Struts technology
In this post I will show you how you can quick start the development of you
struts based project... of views using JSP
pages. This enables the developers to create the GUI
How to register domain name
, Rediff, Yahoo and Sify. For using a certain domain name users have to use...Domain name registration is the first step to start any website. Website requires a name to be known and referred to on the web. Domain name
SEO Article Website to Increase Traffic,Increase Visitors using SEO Article Website - SEO Tips and Articles
. It is a
simple thing, which must be considered while writing articles. If taken... rise in your
website traffic by using articles which have been optimized... with the information required by them based on the keywords entered by
them. The job
Writing Great Articles is Difficult
Writing Great Articles is Difficult
... information required to satisfy reader's curiosity but articles must always be written... never be ignored while writing articles, either by some ?ghost writer' or by you
How to Register Domain Name?
establishments to register a domain name and having your own web presence... offerings to a more wider community of buyers. To register a domain name is just... attributes in regard to registering your own domain name.
Why it is important to have
Writing articles,write articles earn money,earn money from
website,articles income
Writing Articles
... online, but you are not
very sure where to start or afraid of writing articles... or investing huge time. Writing
the articles for website for making money is much
Public Java Keyword
outside
the class in which it is declared.
Using a public
Keyword within...
Public Java Keyword
public is a
keyword defined in the java programming language. Keywords
Post your Comment | http://www.roseindia.net/discussion/32653-Writing-Quick-Articles-Using-Information-in-Public-Domain.html | CC-MAIN-2014-41 | refinedweb | 1,078 | 50.43 |
Before parsing through the Rvalue references draft in C++11 standard, I never took Lvalues and Rvalues seriously. I even never overhead them among my colleges or any c++ books (or may be I would have skipped that part thinking it to be of no importance). The only place I find them often is in compilation errors, like : error C2106: '=' : left operand must be Lvalue. And just by looking at the statement/expression that generated this error, I would understand my stupidity and would graciously correct it with no trouble.
int NextVal_1(int* p) { return *(p+1); }
int* NextVal_2(int* p) { return (p+1); }
int main()
{
int a[] = {1,2,3,4,5};
NextVal_1(a) = 9; //Error. left operand must be l-value
*NextVal_2(a) = 9; // Fine. Now a[] = {1,9,3,4,5}
}
I hope with the above code you got what I am saying. When I went on to read that RValue reference section of C++0x my vision and confidence started shaking a bit. What I took for granted as Lvalues started appearing as Rvalues. In this article I will try to simplify and consolidate various concepts related to L & R values. And I feel it necessary to upload this article first, before updating C++11 – A Glance [part 1 of n]. I promise to update it also soon.Please note that this effort mainly involves gathering scattered information and organizing it in a simple form so that one may not need to Google it again. All credits goes to the original authors.
An object can be viewed as an region of storage and this storage region can either be just observable or modifiable or both depending on the access specifier associated with it. What I mean is:
int i; // Here the storage region related to i is both
// Observable and Modifiable
const int j = 8; // Here the storage region related to j is only Observable
// but NOT Modifiable
Before proceeding to the definitions, please memorize this phase : "The notion of Lvalueness or Rvalueness is solely on the expression and nothing to do with the object." Let me simplify it:
double d;
Now d is just an object of type double [ and thrusting l/r valueness upon d at this stage is meaningless ]. Now once this goes into an expression say like,
d = 3.1414 * 2;
then the whole concept of l/r valuess originates. Here we are having an assignment expression with d on one side and a numerical expression on another side which evaluates to a temporary value and will disappear after semicolon. The 'd' which points to an identifiable memory location is an Lvalue and (3.1414*2) which is a temporary is an Rvalue.
At this point lets define them
Lvalue : An Lvalue is an expression referring to an object, [which holds some memory location] [The C Programming Language - Kernighan and Ritchie]
Rvalue : The C++ standard defines r-value by exclusion - "Every expression is either an Lvalue or an Rvalue." So an Rvalue is any expression that is not an Lvalue. To be precise it is an expression that does not necessarily represent an object holding identifiable memory region, (it may be temporary).
int nCount = 0; // nCount represents a persistent object and hence Lvalue
++nCount; // This expression is an Lvalue as this alters
// and then points to nCount object
// Just to prove that this is an Lvalue, we can do the below operation
++nCount = 5; // Fine.
7. A function call is an Lvalue if and only if the result type is a reference.
int& GetBig(int& a, int& b) // returning reference to make the function call an Lvalue
{
return ( a > b ? a : b );
}
void main()
{
int i = 10, j = 50;
GetBig( i, j ) *= 5;
// Here, j = 250. GetBig() returns the ref of j and it gets multiplied by 5 times.
}
8. A reference is a name, so a reference bound to an Rvalue is itself an Lvalue
int GetBig(int& a, int& b) // returning an int to make the function call an Rvalue
{
return ( a > b ? a : b );
}
void main()
{
int i = 10, j = 50;
const int& big = GetBig( i, j );
// Here, I am binding 'big' an Lvalue to the return value from GetBig(), an Rvalue.
int& big2 = GetBig(i, j); // Error. big2 is not binding to the return value as big2
// is not const
}
int nCount = 0; // nCount represents a persistent object and hence Lvalue
nCount++ // This expression is a Rvalue as it copies the value of the
// persistent object, alters it and then returns the temporary copy.
// Just to prove that this is an Rvalue, we can not do the below operation
nCount++ = 5; //Error
By summarizing the above points we can blindly state that : If we can take address of an expression (for further operations) safely then it is a lvalue expression else it is an rvalue expression. It makes sense right, as it preposterous to carry on with a temporary.
Note : Both Lvalues and Rvalues could be modifiable or non-modifiable. Here are the examples:
string strName("Hello"); // modifiable lvalue
const string strConstName("Hello"); // const lvalue
string JunkFunction() { return "Hellow World"; /*catch this properly*/}//modifiable rvalue
const string Fun() { return "Hellow World"; } // const rvalue
Can an Lvalue appear in a context that requires an Rvalue? YES it can. For example,
int a, b;
a = 8;
b = 5;
a = b;
This = expression uses the Lvalue sub-expression b as Rvalue. In this case the compiler performs what is called lvalue-to-rvalue conversion to obtain the value stored in b.
Now can an r-value appear in a context that requires an l-value. NO it can't .
3 = a // Error. Here 3 which is an RValue is appearing in the context where
// Lvalue is required
Thanks to Clement Emerson for readily helping me in gathering and organizing this information
1..
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Quote:a reference bound to an Rvalue is itself an Lvalue
a = b;
a
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/313469/The-Notion-of-Lvalues-and-Rvalues?msg=4134276 | CC-MAIN-2015-48 | refinedweb | 1,034 | 58.11 |
Terrific Tunisia: Kate Garraway finds mint tea and Mediterranean magic in north Africa
The nearest my husband and I usually get to a week flopping on a beach is an evening at our local Italian, where paintings of the Mediterranean hang on the walls. Nowadays, it all comes down to planning far ahead and relying on doting grandparents.
History at hand: The ruins of ancient Carthage sit just beyond the modern city of Tunis
On this occasion, Derek and I were looking for somewhere hot and not too far away; somewhere relaxed, but with stylish accommodation so we could enjoy some glamorous dressing-up; somewhere with plenty of culture for when we would be tempted to leave our sunbeds.
We found all that and more in Tunis, or, more specifically, at The Residence, ensconced in one of its suburbs.
The hotel reflects the city itself, a blend of laid-back Africa and the elegant Med. Designed with a Moorish-Arabian feel, everything was cool cream with flowing open spaces and an endless perfume of neroli and jasmine. No gold taps or wall-to-wall gilt, thank goodness.
After a two-and-a-half hour flight and a short taxi ride, we had left the cares of parenthood behind. Any pangs of guilt about leaving our two children were quickly assuaged by the sumptuous spa, great food and the blissful feeling of lying by the pool in the 26-degree autumn heat without worrying that a little person might be about to fall in or upturn a bucket of water over your newly applied sun cream.
The truth is that we could have spent all of our three-night stay within the walls of the hotel grounds, but we also wanted to get out and about and see Tunis and ancient Carthage. Oh, and do some shopping, of course.
The Residence is just 20 minutes or so from Tunis and slightly less to Carthage, one of the greatest cities in the ancient world, founded in 814BC by the Phoenicians. You can get up close and personal with the ruins. Maybe too much so. I began to wonder if we were damaging them.
It's worth hiring one of the official guides who can bring the place to life, and the attached museum has some extraordinary mosaics on display, a replica of one of which now has pride of place in our garden.
The next day we headed to Tunis, which looks African one minute, continental the next.
The capital is bisected by the Avenue Habib Bourguiba, resplendent with a fabulous art nouveau theatre, lots of pavement cafes with names such as Cafe de Paris, French-style patisseries and some lovely manicure salons. Yet just as you get used to the Mediterranean vibe, the end of the tree-lined avenue holds an exotic surprise. Suddenly you are plunged into a scene straight out of The Arabian Nights - the Unesco-listed Medina.
Into Africa: Kate found sunshine and adventure in Tunis
This was once a walled city-within-a-city with a population of more than 100,000. Today, just a small proportion of the population lives here. It is a maze of twisting lanes and narrow alleyways, and you really need to have a guide with you.
At first, the stalls seemed to offer standard tourist fare - leather slippers, bags and such like - but you need to persevere and go deep into the heart of the covered rabbit warren.
There lie the souks or marketplaces, each one with its original trade. We found the hat-makers' alley, the wool weavers, the copper-beating sector and the carpet-makers street.
We came back with a runner for our hall, but I have to admit we probably paid way too much for it.
Bartering is not my strong point. At first, we were pleased to have got 20 per cent off the original asking price, but that night some friends we made at the hotel told us we should have stood firm and got a proper bargain.
They also advised us that we should shop wearing our most causal clothes and leave any jewellery or watches behind. Apparently, the merchants are expert at judging the wealth of a potential customer and set their prices accordingly.
We consoled ourselves with the thought that it had been a great experience - and that we did get some lovely fresh mint tea thrown in.
Much less intense is the beautiful village of Sidi Bou Said, which you reach by hopping on a fabulous little train that trundles along happily for the 20-minute trip. When we got off we could have been in Greece, for this is a picture book- beautiful replica of many Greek villages - all brilliant white and blue.
Named after a saint who came to write poetry, the village attracted painters and writers from across Europe in the last century who soaked up the views and the heady cafe society.
Despite coaches of tourists, there was still a magic about the place. The striking Tunisian doorways, mostly blue, and the ornate window frames and balconies with endless trailing bougainvilleas made this a not-to-be missed part of the trip.
Up and over the hill we stumbled upon the Cafe Chabaane and enjoyed more mint tea with pine nuts served in small decorated glasses. We watched the boats bobbing in the tiny port below and felt a million miles away from our normal hectic lives.
Derek has long talked of taking up golf, so when he heard that the hotel had just opened a golf course, designed by Robert Trent Jones Jr., he dragged me along for a look.
We were shown around by the golf pro, a Tunisian named Tarik, who had something of Enrique Iglesias about him. He told us that more and more women were signing up for golf lessons. This seemed to please Derek and, to my surprise, when he began hitting some balls, he actually showed some promise.
The Residence's sales director is a woman from Yorkshire called Helen Ben Salem who met and married a Tunisian 30 years ago.
Pool your resources: The Residence hotel is an ideal hideaway for tired (celebrity) parents
It was she who suggested we drop in to Sadika, a shop within walking distance of the hotel and which I would definitely have passed by without a second glance.
Sadika herself welcomed us into this emporium of beautiful blown glass ornaments, jugs and glasses and told us she had trained in Venice. And it showed.
Suffice it to say I had a very carefully packed little box to take home. That beautiful glass ornament now sits on our mantelpiece in North London to remind me of our precious long weekend away. And, so far, the children have not yet managed to break it.
Travel Facts
British Airways has return flights from Gatwick to Tunis from £135 return (0844 493 0787,).
Rooms at The Residence from around £270 a night B&B,.
Most watched News videos
Terrified boy's hilarious reaction to finding spider in canoe
'I have no answer': Meghan Markle flops British knowledge quiz
ISIS militants in horrific public executions across middle east
Armed gangs fight it out in mass road brawl
Shocking video shows ISIS destroying US-made M1 Abrams tank
Pilot saves lives of 439 passengers by narrowly avoiding collision
'Drama queen' cyclist films a series of near misses over a year
Harry Caray calls Cubs World Series win in stirring Bud ad
NYPD searching for suspect who punched, killed 64-year-old man
Paul Connolly meets thief's Tara and Lauren for first time
Shocking moment Kumbuka tries to smash glass at London Zoo
Is this the creepy moment the corpse of a girl OPENS her eyes?
Would you want a pilot to tell you if the plane was about to...
Pictured: The incredible submerged plaza you have to walk...
Keeping it real: The no-filter Instagram images that prove...
An exploding volcano, a nosy killer whale and a very brave...
Hilarious image that shows how the Sydney Opera House looks...
Why the hotel of the future could be an eco-pod robot...
Pilots distracted by phones, unwashed blankets and coffee...
Flight attendant becomes social media star after dressing up...
Welcome aboard? Don't get too excited: The US carrier that's...
A runway built on ice, a landing strip on the edge of a...
From the thorny dragon to the Mexican walking fish: Where to...
'Dad of the year' hands out sweets to plane passengers on...
Share what you think
We are no longer accepting comments on this article.
| http://web.archive.org/web/20161104075431/http:/www.dailymail.co.uk/travel/article-1337951/Kate-Garraway-enjoys-magical-Mediterranean-weekend-Tunis.html | CC-MAIN-2019-39 | refinedweb | 1,440 | 67.08 |
How would you like to help fix the Internet?
One of the efforts I’ve been contributing to during the last year is the Bufferbloat project, a group of experienced Internet engineers who believe that excessive buffering and poor queue-management strategies may be the real villains behind a lot of network problems commonly attributed to undercapacity.
Before we can solve the problem, we need to measure and map it by collecting a lot of packet-propagation-time statistics. Awkwardly, we suspect that one of the services being screwed up by bufferbloat-induced latency spikes is the Network Time Protocol. So…Dave Täht (aka Dave from my basement) is trying to build a device he calls the Cosmic Background Bufferbloat Detector. The CBBD would be a flock of routers scattered all over the world, watching NTP packet timings using a common timebase independent of NTP, and sending data back to a collection server for analysis and visualization.
That’s where I, as the lead of the GPSD project, come in. GPSes are an obvious candidate for a high-precision NTP-independent time service. But there’s a problem with that…
With rare and extremely expensive exceptions, GPSes only report time to a hundredth of a second, at most, in their data stream. And we’ve found by experiment that the SiRF, the chip used in 80% of consumer-grade GPSes, has a long-period wobble of up to 170 milliseconds in its time-reporting latency. This is no good; NTP time is supposed to be accurate to 10 milliseconds, so for diagnostic purposes we want a timebase about an order of magnitude better than that, or about 1ms accuracy.
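To see why that wobble is fatal, here’s a back-of-the-envelope simulation. Only the 170 ms amplitude and the 10 ms / 1 ms budgets come from the measurements above; the sinusoidal shape and the hour-long period are my illustrative assumptions:

```python
import math

# How much a slowly wandering serial-delivery latency corrupts the
# clock offset you infer from GPS sentence timestamps. The 170 ms
# figure is from the SiRF measurements; the sinusoidal model and
# one-hour period are illustrative assumptions.

WOBBLE_S = 0.170       # latency wobble, seconds (treated as peak-to-peak)
NTP_BUDGET_S = 0.010   # what NTP is supposed to deliver
TARGET_S = 0.001       # what the bufferbloat measurement needs

def apparent_offset_error(t, period_s=3600.0):
    """Error in the inferred clock offset at time t (seconds)."""
    return (WOBBLE_S / 2) * math.sin(2 * math.pi * t / period_s)

# Sample two hours of wandering latency and find the worst case.
worst = max(abs(apparent_offset_error(t)) for t in range(0, 7200, 10))
print(f"worst-case timestamp error: {worst * 1000:.0f} ms")
print(f"blows the 10 ms NTP budget: {worst > NTP_BUDGET_S}")
print(f"blows the 1 ms target: {worst > TARGET_S}")
```

However you slice the 170 ms figure, the error dwarfs both budgets, so serial timestamps alone can’t get us there.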
There is a way to get this from GPS. GPS chips have an output called 1PPS, a pulse emitted at the start of each GPS-clock second with accuracy to 50 nanoseconds. So, in theory, it’s simple: you use the 1PPS to trigger an interrupt on your host machine, latch that as top of the second, and use it to condition your clock. Even allowing for interrupt-processing overhead, you can expect this to keep your local clock accurate to the common timebase to about 10 microseconds – two orders of magnitude finer than our 1-millisecond accuracy goal.
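To make “condition your clock” concrete, here’s a toy discipline loop. The drift rate, gains, and starting offset are my illustrative assumptions – this is not gpsd’s or ntpd’s actual algorithm, just a sketch of the feedback idea:

```python
# Toy PPS clock-discipline loop. At each 1PPS edge we learn the true
# top-of-second, measure how far off the local clock is, and feed a
# fraction of that error back into both the offset and the rate.
# All constants are illustrative, not taken from gpsd or ntpd.

DRIFT = 50e-6        # local oscillator runs fast by 50 ppm (assumed)
GAIN_OFFSET = 0.5    # fraction of the measured error slewed out per pulse
GAIN_FREQ = 0.1      # fraction fed back into the rate estimate

offset = 0.25        # start 250 ms away from the true second boundary
rate_err = DRIFT

history = []
for pulse in range(60):        # one PPS edge per second, for a minute
    offset += rate_err         # drift accumulated over the past second
    measured = offset          # the PPS edge reveals the true boundary
    offset -= GAIN_OFFSET * measured
    rate_err -= GAIN_FREQ * measured
    history.append(abs(offset))

print(f"offset after 60 pulses: {history[-1] * 1e6:.3f} microseconds")
```

Even this crude loop pulls a quarter-second initial error well below the 1 ms goal within a minute; real discipline code does much better, which is why the 1PPS path has so much headroom.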
GPSes, or at any rate the sort of inexpensive GPSes you’re limited to when you’re contemplating deploying a hundred or more attached to CBBD routers, are simple beasts. They’re built around a module like the SiRFStar II or III that’s basically a single chip with the RF and signal-processing stages for the GPS. That module ships TTL-level serial data, with two lines for TX/RX, a ground, RTS, and a fifth wire carrying the PPS strobe (usually mapped as the DCD or Data Carrier Detect line).
Typically these wires are carried to a serial-to-USB converter such as a PL2303 which provides the data path off the device. Yes, some GPSes go to RS232, but that’s increasingly uncommon and we couldn’t use those anyway because the inexpensive routers we can afford to deploy by the hundred only have USB ports. Yes, serial-to-USB adaptors do ship an event corresponding to a change in DCD line state; turns out USB latency costs you about 50 microseconds of slop, which is well within our maximum error budget.
This is where it gets messy.
You see, in order to cut costs (or something) most GPS manufacturers drop the 1PPS strobe line on the way out. They could connect it to the DCD input on the serial-to-USB converter, but they don’t.
Now let me introduce you to two devices. Exhibit A is the Globalsat BR355. I have one on my desk. This is an extremely typical consumer-grade GPS mouse based on the SiRFStar III. It doesn’t ship 1PPS, though older versions sold under the same name apparently did – removed to cut costs.
Exhibit B is the ZTI Z050, advertised as a USB navigation and timing dongle. It fits a GPS chip and serial-to-USB converter in a thumb-drive case. It uses a different, non-SiRF chipset from Trimble, but that’s a detail; it ships to the converter over TTL just like the SiRFStar. But the Z050 does carry the 1PPS trace to the serial-to-USB converter, and you can see PPS events on the USB bus. ZTI advertises 1ms accuracy.
Essentially, the logical difference between these devices comes down to the presence or absence of one trace on the circuit board.
The BR-355 costs $36. The Z050 says “call for quote” and I was told $950. Yes, that’s right; that one PPS trace costs $914.
Now, part of this is the North American distributors marking the device up insanely. A European friend called ZTI direct and was quoted €175, or about $225. That’s not the highway robbery the distributor was attempting (they’re called “Omnicor” – try not to give them your business), but it’s still bloody ridiculous.
So I’m trying to think up a solution, and it occurred to me that building your own USB GPS from parts and a custom circuit board isn’t that complicated. One of my GPSD devs has actually done it. We can’t use a homebrew GPS for this deployment, since that wouldn’t scale to a hundred units, but …
…isn’t there an opportunity here? It ought to be possible to manufacture a timing dongle like the Z050 really cheaply; remember, the PCB-level difference between it and that $36 BR-355 is basically one trace. One design engineer with connections to a Taiwanese job shop ought to be able to get a thousand of these cranked out at barely $10 a pop.
That’s what I’m looking for. These clowns should have competition. So, calling all open-source hardware engineers – can we do this thing? Spec a parts list, design a PCB to fit in a thumb drive, publish it as an open design, and then actually get the little sucker manufactured?
There might even be money in this. The Bufferbloat project wants at least a hundred of these for the CBBD, and the thumb-drive form factor could make it really popular with laptop users.
Anybody feeling entrepreneurial?
While I’m certainly willing to believe that dubious buffer management in low-level internet code is a problem (because it’s software, and most software is badly broken, for a lot of good sound economic reasons), my gut feel is that the root cause of any given slowdown that anyone is seeing comes down to “fuk’n youtube”. Optimizing protocol stacks is God’s work, but isn’t going to do a huge amount of good handling a flood of video through sort-of-but-not-really-video-friendly pipes.
I’ve looked at the specs for the 32-channel San Jose Navigation GPS 5Hz ($99 on Sparkfun), and in the specs pin 7 is 1PPS.
Not sure if it would be quick enough, but an Arduino with an Ethernet shield & a small custom board for the GPS? A GPS shield exists; not sure if it conflicts with the Ethernet shield.
Could run standalone, or hook up to a PC via USB.
Re Sparkfun and Arduino: the dev I mentioned earlier (Chris Kuethe) built his own rig from Arduino parts after discovering that none of the Sparkfun boards bring out 1PPS. (He rechecked this while I was on the phone with him a couple of days ago.) But a hand-build approach won’t scale to the required number of units.
I haven’t studied the various boards and chips, but Sparkfun carries lots of GPS modules:
If you can identify one of those that meets your needs, we could fairly easily hack something together, test it, and then either recommend that (probably < $150) or do as you suggest and lay out a new board.
Sparkfun also has a GPS buyer’s guide — haven’t read it yet.
I see Mouser sells the Maestro A1035-H for about $20, and it has a 1PPS pin (pin 15):
A carrier board with an FTDI chip would be all that’s needed, plus some sort of (GPS-transparent) enclosure to make it pretty.
It looks like regular Mouser links are not made for human use. This one will work though:
If it is just a missing trace, would it be feasible to simply solder a wire on it in 100 units?
And Adafruit has one for $40 that they claim offers precision time:
But that one and some of the ones at Sparkfun use a SiRF chipset. Does its wobble extend to the pulse pin you’re interested in?
>And Adafruit has one for $40 they claim offers precision time:
The Adafruit looks possible.
>But that one and some of the ones at sparkfun use a Sirf chipset. Does its wobble extend to the pulse pin you’re interested in?
No. The wobble is in the latency of time delivery off-chip via the TTL-pins-to-RS232 or -USB path.
I believe one can obtain both a GPS “shield” and a USB interface for an Arduino at a reasonable cost. It’s all open source, too.
Actually, there are now several industrial modules available. For example, at Mouser:
Is there anything wrong with the Garmin 18x LVC or Garmin 18x-5Hz?
Those appear to be in nice packages for around $60 and claim 1 us accuracy for the timing pulse.
>Is there anything wrong with the Garmin 18x LVC or Garmin 18x-5Hz?
The Garmin 18 is really frustrating. It comes in three variants. The PC and USB ones don’t bring out 1PPS. The LVC does – but the data cable ends in bare wires! You’re intended to crimp your own connector and interface to them.
BTW, electronic parts pricing is extremely volume sensitive.
Often, you can buy something like this much cheaper than you could possibly make it, or even have a Taiwanese job shop make it, unless you’re prepared to make at least several thousand.
If you’re fortunate, the price break for a part comes at small quantities. For example, one balun I use costs $16. Or $4 if I buy 10. That’s right — I can buy 10 of them cheaper than I can buy 3 of them. And the price break for buying 1K or 10K of something is often 90% or more.
Huh. Didn’t see Andrew Filer’s comments before I mentioned mouser. Moderation queue stuff?
Here’s a tutorial using the garmin device and gpsd:
A couple of jobs ago, I got involved with commercial GPS hardware; you’re right that the manufacturer’s markup in small quantities is insane, but that’s why scaling up is so important.
Where are you planning on deploying these USB-GPS widgets? If you’re going to be using them indoors, you’d better plan on having an external antenna port.
>Where are you planning on deploying these USB-GPS widgets? If you’re going to be using them indoors, you’d better plan on having an external antenna port.
Indoors near windows. We think this is enough skyview for time with modern RF stages, but that’s an assumption that will require careful testing with prototypes.
This can be done with these two, an enclosure and mounting hardware, a passive antenna, and some bits of wire:
It can be bus-powered if the GPS is jumpered for low-power startup. Active antennas would need an extra 3.3V regulator to power the GPS.
Unfortunately, the device at Adafruit has no PPS signal at all.
>This can be done with these two, an enclosure and mounting hardware, a passive antenna, and some bits of wire:
Hackers. You tell them hand-build approaches won’t scale, and they respond with – hand-build recipes.
I know that the better the clocks, the better your measurement, which needs very good leads (witness the neutrino speed problems). So I understand your drive to get very precise GPS clocks.
But your problem seems to be one that could be solved by Network Tomography.
With tomography you can work with timing differences, e.g., return traffic times and route difference times that hit specific nodes. Then you might be able to do away with global time precision and make do with only very precise timing differences.
I am sure I am missing something here. But from your post I do not understand why you need high global (absolute) time precision instead of relative time difference precision.
>But your problem seems to one that could be solved by Network Tomography.
The CBBD approach is a variant of delay tomography. But if you read the description of that technique carefully you’ll see that it requires a common timebase; otherwise the delay timings might be corrupted by clock skew.
>I do not understand why you need high global (absolute) time precision instead of relative time difference precision.
We don’t in fact need absolute time. What we do need is a common timebase – the ability to match timestamps across the net. The valuable thing about GPS is that it’s a low-cost way to refer to a common timebase at global scale.
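To make the skew problem concrete, here’s a toy calculation; the delay and skew numbers are purely illustrative:

```python
# One-way delay measurement with and without a common timebase.
# TRUE_DELAY and SKEW are made-up illustrative numbers.

TRUE_DELAY = 0.030   # actual one-way path delay: 30 ms
SKEW = 0.120         # receiver clock running 120 ms ahead of sender

def one_way_delay(send_ts, recv_ts):
    """Naive one-way delay: receiver timestamp minus sender timestamp."""
    return recv_ts - send_ts

t_send = 1000.000                    # sender clock at transmission
t_recv_true = t_send + TRUE_DELAY    # arrival on the common timebase
t_recv_skewed = t_recv_true + SKEW   # arrival per the skewed local clock

measured_skewed = one_way_delay(t_send, t_recv_skewed)
measured_common = one_way_delay(t_send, t_recv_true)

print(f"skewed clocks see:    {measured_skewed * 1000:.0f} ms")
print(f"common timebase sees: {measured_common * 1000:.0f} ms")
```

The skewed measurement absorbs the entire 120 ms of clock error – several times the real delay – which is why the nodes need a shared timebase rather than merely accurate local clocks.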
@esr
I would love to see a “useful*” link to an overview of the CDDB approach. Could you post one?
I once discussed whether it would be possible to use network tomography to determine the physical (cable) distances between an unknown networked computer and routers and servers with known geographical positions. I still think that must be possible.
*Useful: I can stand quite a lot technical and mathematical content. But not the level to reproduce a tomography imaging program.
> Hackers. You tell them hand-build approaches won’t scale, and they respond with – hand-build recipes.
Oh, I know it won’t scale, but that gives someone a good idea what the circuit needs to contain.
Eric,
Make a real proposal, take it to Kickstarter. If you get N backers/customers, have it built.
Otherwise, the idea probably sucks.
No idea if this is useful, but my go-to place for little devices like that is a Canadian company called Phidgets (needless to say I am a major stockholder who will personally benefit massively from this endorsement. :-)
They have a GPS device that claims a timing accuracy of 300ns, and is about $80 in quantity.
I’m not knowledgeable enough to judge the merits but I thought I’d mention it to you. I am sure also that I don’t have to tell you that at that level of precision you are going to have to know physically where the device is located, along with the ephemeris data on the satellite, and adjust according to the propagation delay. But I am sure you already thought of that.
Sparkfun.com has modules for $50 or less. Many routers have internal serial and run at 3.3v
Adafruit.com is another source, as well as some of the build-a-drone places.
You don’t really have a spec but I’ve built something like that. Sparkfun’s boards are opensource.
And there is a variant of the skytraq that can do 10nS from gps – it does only timing from gps. – schematic and Eagle (circuit board) files available.
You would need an antenna and then connect the RxD and the PPS.
20Hz – this is what I use on my harley.
They also have a devkit and I can do my own firmware (only 10Hz) for the SkyTraq itself.
I am not sure why you would use routers for this project. If you are intending to deploy routers for traffic analysis then OK, but I would still use a TS7200 rather than an actual router. Or, if you are going to use a router, use one that has open code. If that is what you’re doing then I would suggest that you consider using the PTP protocol instead of trying to build something around GPS. And, if you rewrite PTP to include the use of RDTSC then you will have your extremely accurate time base.
The Phidgets link Jessica mentioned looks interesting.
If you really want someone to build a few hundred, you should work with seeedstudio.com or one of the guys who already works with them, like dangerousprototypes.com
They have several business models available already, and I’m pretty sure you’ve influenced them :-)
Using a $3 chip (ATtiny2313), maybe $5-10 system cost not counting the GPS, I can send out a non-NMEA character, e.g. @ or #, at the PPS (1 character of time jitter if the GPS is already sending – about 100 µs at 115200 baud), and then pass through the rest of the NMEA stream. So it would be GPS -> AT2313 -> USB serial. Of course you can use two USB UARTs.
Note the antenna is important – either an active antenna you can place near a window, or at least a larger, maybe amplified one, or the USB cable will have to be long. If it is one of those modules you will need to keep it away from a router doing wifi.
Sparkfun has a $50 module with PPS – 3.3v.
The Phidget is basically a SkyTraq (I can’t see if it is the 624 or 628) with a USB connection and does NOT have a PPS output broken out.
Take the SparkFun SkyTraq Venus 628 module, and add an ATtiny4313 (baud rate friendly crystal, with ISP header for programming and jumper block to select how to connect the serial ports and for initial GPS setup and general breakout so the board will be useful elsewhere). Add a USB to serial chip and connector, PL2303 or FTDI and 5v to 3.3v regulator and backup battery/cap. Include the SMA magnetic antenna and you have the system.
@tz:
Disclaimer: I haven’t looked at the Phidget and don’t know how it does what it claims to.
I could be wrong, but I thought the PPS output was an example of an enabling technology rather than a requirement.
The requirement was stated to be met with at least 50 µs of precision at the computer. If the Phidget really somehow lets you get within 300 ns, that more than meets the requirement.
What chance is there that there is an Android phone that brings out the 1PPS signal from its internal GPS. Or, if such a thing doesn’t exist, that you can convince Google to make such a thing happen for the next-gen Nexus platform?
Then your future cast-off Google Nexus could be a nice USB-based GPS clock.
Google must be concerned about buffer bloat.
I don’t understand your comments on scale. What exactly do you want? Do you want ~100 devices for your project, or do you want someone to start selling a cheap USB/GPS timer?
If it’s the former then ~100 devices could be hacked together in about a week. And if the hack only needed basic soldering skills then some members of your project would be able to hack their own anyway.
What’s your target unit cost?
If all you need to do is solder on a wire to expose the 1PPS signal on the GPS shield, flash the Arduino, drill a couple of holes in a project box and plug it all together, then I don’t see why you need a custom design.
And sparkfun (or others) might be open to revising their design of GPS shield to expose the 1PPS line making the build easy for anyone on your project.
It’s not an optimal solution, but it gets the job done.
Eric,
Perhaps the one-off would scale. What if a few of your GPSd folks taught a free “design and build” lesson at their local hacker spaces.
10 spaces, 10-12 students per class, you create 100-120 devices, which can then be used with your project.
Let the hackers work, man, let ’em work.
The ZTI is expensive because it is accurate to +/-25 ns and that goes over USB with a corresponding driver. The SkyTraq should be accurate to +/-30 ns (and SkyTraq has a variation that can go to +/-10 ns with correction – it says the pulse is that early or late).
@Patrick – I went to the Phidgets site and got the docs and pics available. To do 1PPS you need that pin broken out or attached somewhere and it isn’t and there aren’t any apparent connections.
@Larry – zero chance on android, I don’t think OpenMoko does either. Cast off cell phones (used, refurb) are also expensive, at least those with GPS. Even the cheap chinese android devices won’t have this.
@alanuk – Good Point – I forgot to ask. What is the quantity? 100? 1000? More? Most of this is the tooling charge so it should be a lot cheaper per-unit in quantity. Perhaps a kickstarter project is in order.
>What is the quantity? 100? 1000? More? […] Perhaps a kickstarter project is in order.
We’re looking at 100 firm, two or three times that if the project really takes off.
Kickstarter is an interesting idea for gathering funds for a fabrication run. I’ll investigate it.
@tz:
But they claim 300 ns timing accuracy. If they actually provide that at the computer, that solves the problem without requiring the 1PPS pin. If it’s a marketing lie, that’s a different story.
>But they claim 300 ns timing accuracy. If they actually provide that at the computer, that solves the problem without requiring the 1PPS pin. If it’s a marketing lie, that’s a different story.
A problem with the Phidget is that there’s not enough structure in its datastream, as described, to make it self-identifying the way a GPS is. That really, really complicates things – enough to disqualify it, actually.
If you can characterize the average of that wobble to within X ms, and the clock on your computer was accurate enough, wouldn’t it be possible to derive the time to within about X ms eventually by sampling?
You’d presumably need to do it for each GPS (sub-)model, though, so I’m not sure if that actually is much of a win…
>If you can characterize the average of that wobble to within X ms, and the clock on your computer was accurate enough, wouldn’t it be possible to derive the time to within about X ms eventually by sampling?
Yes, but X would be significantly larger than our target accuracy. The wobble is very noisy.
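[For readers following the sampling exchange above: the idea amounts to averaging down jitter, and it only works to the extent the wobble behaves like independent noise, where the residual error of the mean falls roughly as sd/√n. A toy sketch, with entirely made-up numbers for the offset and wobble:]

```python
import random
import statistics

def estimate_offset(true_offset_ms, wobble_sd_ms, n, rng):
    """Average n noisy offset samples.  If the wobble were independent
    noise, the standard error of the mean would fall as sd / sqrt(n);
    systematic or correlated wobble does not average away like this."""
    return statistics.mean(true_offset_ms + rng.gauss(0, wobble_sd_ms)
                           for _ in range(n))

rng = random.Random(7)
# Hypothetical figures: 120 ms true offset, 50 ms of per-sample wobble.
print(abs(estimate_offset(120.0, 50.0, 10, rng) - 120.0))
print(abs(estimate_offset(120.0, 50.0, 10000, rng) - 120.0))
```

With 10,000 samples the residual error of the mean is on the order of half a millisecond; Eric’s point is that real NMEA-stream wobble is not this well-behaved, so the achievable X stays much larger.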
@esr:
One thought about why they didn’t hook up that pin. It might upset/annoy drivers and/or software (perhaps even more on that other operating system) to think that the modem status keeps arbitrarily changing like that.
Eric,
I have a Sure Electronics MG1613S here that provides PPS after soldering one additional wire. See these two links:
The chip is a MTK-3301 and it works fine.
Winter said: I once discussed whether it would be possible to use network tomography to determine the physical (cable) distances between an unknown networked computer and routers and servers with known geographical positions. I still think that must be possible.
If one Knew (to a sufficient, read very high, level of accuracy) the delay characteristics of the routers and network hardware that were Unknown – and if they were consistent enough [or the outliers obvious enough to filter out] in that delay, yes, it seems like it ought to be possible.
The problem is that one doesn’t know their delay characteristics very accurately, does one? Realistically I’m not sure someone outside the network can determine them at all, let alone sufficiently accurately to do cable length determination.
Determining the length of a piece of wire via timing is … tricky*, shall we say. The speed of light is very fast and thus the delays from wire are ver’ ver’ small (about five microseconds per mile!). At that level I’m not sure I trust my local network hardware to process the packets in such a way as to not destroy the timing…
I’d expect “everyday” network hardware to have jitter far exceeding wire-length delay – or at least close enough to it to throw off all the math.
(Alternatively, is there some Deeply Very Subtle analysis that somehow makes all that go away and makes it plausible to do sub-microsecond timing analysis of remote network traffic?
That would be awesome. Literally.)
(* I recall my mother talking shop about that, related to writing and debugging software for cable/fiber testers that did Very Complicated Analysis And Math not only find out that there were faults in a very long run of fiber, but where they were along it.
Trying to do that sort of thing by analysing what would sort of have to boil down to ICMP reply time jitter or something like that?
I’m dubious about the information being present, literally.)
@Sigivald: What I assume ESR is after is placing a “router” right on the port where the internet is connected, with a GPS, to compare against network time protocol (NTP). Every such timerouter would know the precise second, so the NTP latency from near and far could be measured.
Note: if the various NTP sites go through different numbers of switches, that might suffice.
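[For reference on the NTP-latency measurement being discussed: the offset and round-trip delay come from the standard four-timestamp on-wire arithmetic of RFC 5905. A minimal sketch, with hypothetical timestamps:]

```python
def ntp_offset_delay(t0, t1, t2, t3):
    """Standard NTP on-wire calculation (RFC 5905).
    t0 = client transmit, t1 = server receive,
    t2 = server transmit, t3 = client receive.
    Returns (clock offset, round-trip delay) in the inputs' units."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay

# Hypothetical timestamps, in seconds:
off, dly = ntp_offset_delay(100.000, 100.050, 100.051, 100.111)
print(off, dly)  # offset ≈ -5 ms, delay ≈ 110 ms
```

A timerouter with a GPS-disciplined clock makes t0 and t3 trustworthy, which is exactly what lets the one-way delay asymmetry (and hence queueing delay) be separated out.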
Dunno if it’ll help, but I posted this to the Arduino forum. Lots of folks over there who have experience that could help.
I wonder if crowdsourcing this could be effective. Or perhaps contact Limor Fried? She’s enthusiastic about open source, and surely knows something about getting stuff built.
A naive question from someone unfamiliar with the dirty details:
Assuming you get the time references built and working, so that you now have accurate delay measurements, how can you tell the difference between delays caused by buffer bloat and delays caused by simple bad routing?
Eric,
although you’re justifiably partial to GPS, have you also researched radio-controlled clocks as a time reference for your project? Wikipedia, admittedly without citing a reference, says that they, too, can give you your desired precision of 1ms, limited by the margin of error for the delay in radio transmission.
Advantages:
(1) It’s long-wave radio, so it’ll naturally work everywhere in the building, even with no skyview at all. You won’t need “careful testing with prototypes” to check for adequate reception.
(2) There’s a dude with a website by the name of Jonathan, who appears to have built something similar to what you want, parts list and all, for about $50. His heart seems to be in the right place for an open-hardware project, since he released his source code for the Linux client that listens to his device. Perhaps the two of you could talk about massaging what he already has into the mass-producible, thumb-drive-sized solution you need?
Disadvantages:
(1) Because each receiver will sit at a different distance from the atomic clock whose transmitter it listens to, each will need individual calibration before delivering the precision you need. And Jonathan’s default mode of calibration involves—you guessed it—GPS and NTP.
(2) The atomic-clock radio transmitters across the world don’t all broadcast at the same frequency. Hence, your PCBs would probably need to accommodate some variety of receivers, in smaller quantities of each. If so, this will keep the price higher than with the GPS approach.
But hey, maybe the disadvantages can be worked around, so what’s the harm in putting this idea out there? Whichever way you end up doing it, good luck!
>have you also researched radio-controlled clocks as a time reference for your project?
I have. In addition to the precision and signal-availability issues noted by a previous commenter, we’d be looking at a complicated deployment problem because we’re looking to set up monitoring routers all over the world – multiple radio clocks for different regional time services. GPS is plain simpler.
Eric, want a low-cost, high-precision GPS? You can buy any number of Motorola Oncore UT+ receivers for under $20 each on eBay. These have a 1PPS with 50 nanosecond (1 sigma) error. That is better than you need and the cost is under $20.
The telecom industry has been dumping thousands of very high end GPSes for dirt cheap and somehow they end up in China and then on eBay. Even with shipping, prices are low and most of these Chinese sellers are good and honest.
The current production version of this is the M12+T, which sells for $60, and they have the error down to 2 (yes, two) nanoseconds. $60 is not a bad price for that.
If it must be USB you’d need a converter cable. And of course any GPS needs a good clear view of the sky.
>You can buy any number of Motorola Oncore UT+ receivers for under $20 each on eBay.
These all seem to be uncased boards. That means we’d have to find cases for them and assemble cased units, which is exactly the sort of thing that doesn’t scale well up to a hundred-unit deployment.
The comment above about radio clocks sounds good until you actually try. First off, reception is not a sure thing, and you are not likely to get anything at all in the daytime – wait until around midnight. Next, the time code that is transmitted is only good to whole seconds.
If you need software to decode the bits, look at the NTP source code, there is a reference clock driver in there
If you want a hardware solution, C-Max makes it and Digikey sells it. Not expensive. The little units come with an antenna.
But these are, like I said, only good for about 1 second resolution and only at night. You could do better but you’d need a big loop antenna and to place it outdoors far from a house or electronics.
Your best bet will be to buy up some timing GPSes that are being surplused by the telecom industry.
Not that this would be cheap, but this has to be the best GPS hacker dude out there:
This electronic parts search engine might help someone find something useful.
Octopart
It is a Y Combinator-backed startup company based in New York City.
Good luck.
Cases would likely be the easy part. If PacTec doesn’t have a case style in stock that would work, at 100qty or more, they would no doubt be more than willing to customize one of their designs, or even create something from scratch.
Have you considered doing all this in hardware?
There are many microcontrollers that have simple TCP-IP interfaces. I’ve even found a number that have libraries for getting NTP data. It should be pretty easy to hack something that does *everything* together. Just plug it in, connect ethernet, make sure it can see the sky, and you’re good.
As an added bonus, you could know the cycle-accurate timing required for every operation, so you’d get much better precision.
I’d love to try to design something, if you’d like. It sounds like an interesting project.
> whivch is exactly the sort of thing that doesn’t scale well up to a hundred-unit deployment.
You’re wrong.
A 100 unit deployment is easily hand-buildable. You’ll need an order of magnitude more (maybe 2 orders of magnitude) before you can justifiably make the claim.
Certain of the SkyTraq modules have a “sync to UTC second” option that will ensure the start bit of the first character in the NMEA stream lands on the UTC second. I’m not sure about the one from Sparkfun, but their GLONASS/GPS module implements this feature and has a standard USB to serial chip (Silicon Labs). They have an off-the-shelf EVK board which was $100 a year ago. The module name is S4554GNS-LP-EVK. It includes a reasonable magnetic mount antenna but I use the one from Sparkfun as it is more sensitive, though that’s probably not necessary. I don’t know if they have them in quantity or if there is a quantity discount.
If a zero-jitter UART startbit is adequate, perhaps these can be used. There is no enclosure, but the hardware is “off the shelf”.
The SkyTraq Venus628 apparently will sync to the UTC second IF that mode is set AND IF the update rate is 1Hz, so that might be a cheaper alternative, though the EVKs are the same price as the GLONASS modules from SkyTraq. The Sparkfun module would need something like the FTDI 3.3v TTL (arduino pro) adapter and a few wires run, or the board layouts merged – the FTDI has a built-in 3.3v regulator which provides 50mA, and the 628 in low-power acquisition uses 50mA. And an antenna. You can immediately get three off-the-shelf parts plus 4 jumper wires, then add some kind of enclosure, do some setup and be up in a few minutes.
It is also possible that other chipsets – I would almost bet Garmin – do sync to UTC or have it as an option.
Two other things haven’t been clarified.
1. Cost. $50? $200? More? Less?
2. Do you really need true and separate PPS or would a “start-bit edge at UTC” suffice?
If you can deal with start-bits…
I don’t know of a consumer GPS that uses the SkyTraq chipsets, but google shows one which is bluetooth, but they may also go out USB. Most do. At $40 it might pay to get one; if it is SkyTraq it should be configurable to enable the sync to UTC.
If the Garmin, MTK or uBlox based GPS “mice” happen to sync, you could just use those “off the shelf”.
You need to open them up or otherwise be able to see the serial stream to see where it is in relation to the PPS. Any PPS will do as they should all sync, but if you have a number of different chipset GPS mice it may pay to test them to see which ones do what.
>1. Cost. $50? $200? More? Less?
It has to be low enough that more than a hundred volunteers will be willing to buy a $99 router plus the GPS and install it. I think that means $99 more at absolute maximum. Realistically I think we have to come in under $75.
>2. Do you really need true and separate PPS or would a “start-bit edge at UTC” suffice?
We could probably live with the accuracy limit of a start-bit edge, but that kind of reporting stream has other implications that are a deal-killer.
Please, no “just drop GPSD” suggestions in response to that objection. Writing a custom USB-port-monitoring daemon is exactly the kind of time-consuming rathole we don’t want to go down, the software equivalent of all the well-meant but fundamentally wrongheaded suggestions that we should assemble a hundred custom devices from parts.
Think whole-systems engineering at scale, people – we need a solution with low complexity and downstream-maintenance overhead, not fiddly assemblies of custom hardware and software parts. Off-the-shelf hardware and software as well-tested and bulletproof as GPSD might get us there; getting diverted into clever hand-building will not.
@tz:
Three things about FTDI:
1) They actually make cables where the guts are in the USB connector. Several companies (including Sparkfun) sell these.
2) The FTDI chips will send status change information (e.g. DSR or DCD changing) on the next poll from the host, so if you use one of those signals for the 1PPS, it’s probably as good as you get with USB (the rest is all software, and the Linux FTDI driver is open source).
3) Although they cost a bit more, you might want to use one of the high speed FTDI chips (e.g. FT232H) because the host can poll more often and the latency will be reduced.
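[For readers who want to see what point 2 looks like in practice: on Linux, timestamping a modem-status transition is done with the TIOCMIWAIT ioctl, which is how ntpd-style PPS-over-DCD monitoring works. A rough sketch – the device path is illustrative, and gpsd does this with far more care:]

```python
import fcntl
import termios
import time

# TIOCMIWAIT is a Linux-only ioctl not exposed by the termios module,
# so its request number is hard-coded here.
TIOCMIWAIT = 0x545C

def timestamp_pps(port="/dev/ttyUSB0"):
    """Timestamp 1PPS edges delivered on a modem-status line (DCD here).
    Assumes a Linux host and a driver, like FTDI's, that reports
    modem-status interrupts; the device path is illustrative."""
    with open(port, "rb", buffering=0) as dev:
        while True:
            # Block until the carrier-detect line changes state...
            fcntl.ioctl(dev, TIOCMIWAIT, termios.TIOCM_CD)
            # ...then read the system clock as close to the edge as we can.
            print("PPS edge at %.6f" % time.time())
```

The latency between the hardware edge and the ioctl returning is exactly the USB-polling cost discussed above, which is why the faster-polling FTDI parts help.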
I actually thought about using the Garmin 18x LVC with one of the cables, but Garmin seems to be doing some sort of pseudo-RS232 level signalling, so you’d probably need couple of inverters between them at a minimum. But maybe one of the other available modules would work.
I forgot that the FT232R has programmable pin polarities.
Latency wouldn’t be as good as a faster part, but you should still be able to have it down in the sub 3ms range.
So you could take an FTDI cable (TTL-232R-5V-WE) and connect it to the Garmin 18x LVC. Just need basic soldering skills and not to be color-blind.
The Garmin pulls a bit more than the cable is rated for (90mA vs 75mA), but that’s just to meet USB spec at power-up — the assembly would violate USB by pulling more than 100 mA at the start, but 99% of the computers out there won’t care.
Total cost: < $85 + 20 minutes of entry-level soldering skills.
> That means we’d have to find cases for them and assemble cased units,
Here is what you need in addition to the boards and cables:
1. A bunch of little plastic boxes from the grocery store
2. Several hot glue guns
3. A few utility knives
4. A few cases of beer
5. Quite a few pizzas
6. A group of fun, moderately competent people
7. One basement.
Mix these together and you should get your 100 finished units, in one evening of frolicking fun. If I lived closer (and if you bought vegetarian pizza), I’d be down in the basement joining the party.
Or to put it another way, 100 units isn’t really all that many.
>Or to put it another way, 100 units isn’t really all that many.
You’re still not thinking this through. What happens when we start getting bug reports from the field about the devices? What happens if the CDDB is wildly successful and we have more than a hundred volunteers clamoring to install monitoring nodes? How many assembly parties would we have to hold? What about the shipping costs to get all these units deployed?
We need a recipe for a CDDB node that doesn’t require hardware assembly and can be replicated cheaply by anyone willing to plug together off-the-shelf hardware they can order off the net. I’ve suggested designing a Z050-equivalent only because if we can do that once and then off-load the manufacturing onto a Taiwanese job-shop it will become off-the-shelf hardware.
We can’t afford to be diverted from the actual mission – which is operating the CDDB and performing data analysis – into the kind of complications that come with hand-built hardware.
@Jessica Boxer: I wouldn’t want to be the guy with the job of testing 100 units put together at a frolicking fun pizza party. Hand-building electronics takes more attention than partygoers are willing to give. Those workers at Foxconn aren’t having much fun, and for good reason.
@Sigivald
In coaxial cables the speed of light is 200 million meters per second. That is only 200 meters per microsecond.
However, in the situation we have here the return time would be dominated by repeater delays. Effectively, you would be counting hops. With a lot of statistics you would determine the timing characteristics of the paths to the last router. From then on you try to do statistics on the timing of the last arm.
You might try running this product idea past B&B Electronics. Having millisecond-accurate timing over distance is a general need that may appeal to industrial control system users.
Which is why I suggested seeedstudio or dangerousprototypes. seeedstudio will do all that stuff for you (and even pay royalties if you want them). Between them, those guys are set up for lots of different business models, and will probably be more than happy to handle the engineering.
But don’t discount the engineering. If you start with integrated circuits rather than pre-built modules, 100 units is barely enough at the kind of price point you’re talking about to take the risk of building a prototype and then selling units. But that’s dangerousprototype’s business, so you should certainly talk to them:
@Jamie Fenton:
> You might try running this product idea past B&B Electronics.
Sure, but the nice thing about dangerousprototypes is that everything is open source.
@esr
>We need a recipe for a CDDB node that doesn’t require hardware assembly and can be replicated cheaply by anyone willing to plug together off-the-shelf hardware they can order off the net.
I get that. Which is why I think an existing unit is a better plan. There was no soldering iron on my party plan list, just gluing existing, pretested electronics into a plastic box. So here is what I was responding to:
> > You can buy any number of Motorola Oncore UT+ receivers for under $20 each on eBay.
> These all seem to be uncased boards.
Uncased doesn’t matter if it doesn’t need to look pretty, I guess was my point. Tell your users to buy this device and glue it onto anti-static foam in the bottom of a plastic box. Really, a five-minute YouTube video and a short BoM seem to be all you need here.
Custom hardware is a huge undertaking, and a royal pain in the ass, in my experience, especially all these persnickety radio thingies. More to the point, using a job shop and selling them as a finished product has a million legal traps just waiting for you. If you do, then you are going to have to think about getting FCC Part 15 RF certs and equivalents for every other country they go to. This is not only difficult and time-consuming, it is also extremely expensive. From memory you don’t absolutely have to do it in the US (though you open yourself to liability if you don’t) but you do have to have the equivalent in Europe to even ship it there.
@Jessica Boxer:
And RoHS and WEEE. In the EU you have to have provisions to recycle stuff you build…
Not only the EU. California has similar requirements.
Eric is making a HUGE mistake by insisting on a “$99 router” (so he can make gpsd part of the solution).
What is needed here (*) is a box that can plug into Ethernet and run NTP, not a box that can run gpsd. The goal was to build a parallel NTP infrastructure, right?
(* if, indeed, it is needed at all. I don’t hear Jim Gettys (who knows more about bufferbloat than anyone) or Dave Mills (who knows more about NTP than anyone) worrying about the effects of bufferbloat on NTP.)
>Eric is making a HUGE mistake by insisting on a “$99 router” (so he can make gpsd part of the solution).
Not my constraint. Dave Täht chose the WNDR3700 router for its low cost and wide availability, and then recruited me specifically because he thought GPSD was the likeliest time source to connect to it. After brief but fruitless detours into radio clocks (scuppered by poor signal availability) and GPS-conditioned high-precision oscillators (waaaay too expensive), we’ve come back to the original design.
>I don’t hear Jim Gettys (who knows more about bufferbloat than anyone) […] worrying about the effects of bufferbloat on NTP.
Then you’re not paying attention, haterboy. Or maybe it’s just hard to hear anything through that 60-cycle hum.
ESR:.
The Phidget emits straight NMEA – if it is a Venus624 as pictured it probably has the sync mode. If GPSD cannot decode vanilla NMEA it has far greater problems. The 628 does have sync to PPS mode.
“All the other sorts of packets that might come in over USB” – this is a router and likely has only one USB port, or if two the other is likely to be for storage. It will appear as a vanilla serial COM port. And the “other sorts of packets” will all be NMEA sentences.
In the Sync mode, the start bit of the “$” of the first sentence is at the PPS. You don’t even really have to packet sniff more than the “$” at higher baud rates, only wait for a .5 second gap as all the sentences will complete in 300mS, so the first $ after such a gap will be the one corresponding to the PPS. Or look for “$GPRMC,” but insure that the previous one indicated (current one indicates?) a lock (with an A instead of a V).
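[The gap heuristic tz describes is easy to sketch against a synthetic sentence stream – the timestamps below are made up, and a real implementation would pull bytes off the serial port with something like pyserial:]

```python
def pps_aligned(sentences, gap_s=0.5):
    """Yield (arrival_time, sentence) for the first NMEA sentence after
    each quiet gap.  In the SkyTraq 'sync to UTC second' mode, the start
    bit of that sentence's leading '$' sits on the PPS, so its arrival
    time approximates the top of the second."""
    last = None
    for t, s in sentences:
        if last is None or (t - last) > gap_s:
            yield t, s
        last = t

# Synthetic stream: a burst of sentences each second, all done within 300 ms.
stream = [
    (0.000, "$GPRMC,..."), (0.120, "$GPGGA,..."), (0.250, "$GPGSV,..."),
    (1.001, "$GPRMC,..."), (1.118, "$GPGGA,..."),
]
print([t for t, _ in pps_aligned(stream)])  # [0.0, 1.001]
```

The accuracy of this scheme is bounded by the serial and USB delivery latency of that first character, which is the start-bit-edge limitation Eric refers to below.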
For an OpenHardware solution to work, you either need some assembly yourself, or allow for double the cost. You won’t get them under $100 in 100-300 quantity. The tooling charges for the cases would likely be more. Quantity 10,000 makes prices drop in range, but at 100, even the components (GPS chip, antenna, USB to serial, aux chip) are already around $65 and that is without a board, assembly, or packaging.
>The Phidget emits straight NMEA
Really? That doesn’t fit the previous description, but if so, great. GPSD can handle NMEA just fine.
/me checks.
Is it the 1040 you’re talking about? I just read the product manual, and it claims 300ns accuracy but says nothing about the sync-bit feature. Where is that documented?
Just noticed the FTDI part is available on a nice board without a cable attached, suitable for cable attachment:
You can buy 1 of these for $18 at mouser, or 100 for $15.30 each. You could build 5 units at a time for around $400, and you’ll get something with a 15 foot cable and nice plastics on the part near (or outside) the window.
Honestly, this is the way I would do it. Every hacker has a friend with a soldering iron and a heat gun, right?
How would you enclose it?
>Every hacker has a friend with a soldering iron and a heat gun, right?
Probably. Can’t think of any friends who’d take on making a hundred of these, though.
@esr:
> How would you enclose it?
Just to make sure we’re on the same page, the garmin comes in a nice enclosure, so you’re talking about the end that plugs into the computer, right?
That’s what the heat-shrink is for. You’d probably use some smallish stuff on the wire, then some bigger stuff on the whole connector end assembly. Hard to describe. But here’s a picture. Imagine that, instead of the light, there’s a cable coming out, with a little bit more heatshrink around the first half-inch or so of cable.
Alternatively, I believe the FTDI board would fit fine inside this enclosure. Might need to glue it down because it wasn’t designed for it, but it’s no big deal, and would give you a professional look for another buck-fifty.
> Probably. Can’t think of any friends who’d take on making a hundred of these, though.
The making’s not the real problem. As somebody else mentioned, you could have a party. Three people could drink a lot of beer and get 100 of these together in a couple of hours, with only one guy soldering. One guy going slow could probably still do over 10 an hour.
To the extent I want to donate and make the world a better place, I’m happy to build one or two as a proof of concept. I’m even happy to solder a few dozen together every now and again if somebody wants it. But I’m not interested in making a business out of it, taking care of inventory, being responsible for refunds, etc.
In addition to seeedstudio, you might check out sparkfun. They’re domestic, and ship all over the world, and are happy to do new designs:
Since they already have GPS modules, and already have designs with the FTDI, why not point them at this blog post?
I know. This is why I keep saying I want off-the-shelf hardware. not a hand-build.
>In addition to seeedstudio, you might check out sparkfun.
How is Sparkfun going to help? Looks like they’re all about parts and subassemblies, not finished products in a case.
> I know. This is why I keep saying I want off-the-shelf hardware. not a hand-build.
Yeah, but you also say you have a target maybe around $75 and a number of units maybe around 100. If we assume that the $35 going price of a USB mouse is about right, then there’s $4000 “extra” for somebody willing to do all the engineering, stocking, warranty, etc. That’s not enough to warrant starting a business, but it might be enough for somebody already in the business…
> How is Sparkfun going to help? Looks like they’re all about parts and subassemblies, not finished products in a case.
You’d be surprised what kind of stuff sparkfun will put in a case. If it’s weird enough and appeals enough to their geekly audience, why not?
Perhaps a business might be more interested in producing a device that has both the “router” and GPS in a single box. This would allow them to claim the entire $175 per system deployed, making it easier to amortize the costs of design.
Putting a computer into the thing would be beyond an amateur with a soldering iron, but you are already discounting such people. For serious prototyping firms, “embedded Linux computer such as in a router” is likely a macro in their circuit design software.
You’d also get the best latency possible, as the PPS could be given a dedicated interrupt line.
The device would sell outside your own projects, as it would at once be both a “router with good time” and a “standalone Stratum 1 server with an extra Ethernet port”.
@Patrick Maupin
> And RoHS and WEEE. In the EU you have to have provisions to recycle stuff you build…
I think the job shop should be able to handle RoHS; after all, this isn’t the space shuttle we are talking about here. And I suppose WEEE can be handled with a notice saying “When you are done, mail it back to Eric’s house.” But I’m not an expert.
For sure a pre-built part that users can order and put in a box avoids the whole bureaucratic nightmare that is consumer product engineering, especially in low quantities where the NRE per unit is gigantic.
@Jessica Boxer:
> I think the job shop should be able to handle RoHS, after all this isn’t the space shuttle we are talking about here
Technically, you’re right. Legally, the paperwork might actually be worse. You have to track everything, and there is even a provision for liability for people who aren’t the principal offender. There’s apparently a potential for up to 3 years jail time, and we know that doesn’t happen for the space shuttle even when people die (but espionage is another issue).
According to this:
in Greece, if the relevant violations weren’t intentional, you only face up to 1 year of jail time. What a relief!
@Michael Deutschmann:
Yes, I think that’s an excellent product idea. But I also think that if someone built it, they might be able to maximize revenue by selling it for a lot more than $175, given that GPS time servers without routers go for more than that.
OTOH, if you had the balls to build a few thousand routers with GPS built-in, you could probably create a new product category and make a lot of money at a price-point under $200. Especially if it had WiFi and ran Linux and one or more of the Linux router distros.
@Michael Deutschmann:
That product idea (replace a Cisco SOHO-type router, with as much functionality as you can) might actually be worthy of a kickstarter project. Especially if it had a quadband cellular modem for backup.
>That product idea (replace a Cisco SOHO-type router, with as much functionality as you can) might actually be worthy of a kickstarter project. Especially if it had a quadband cellular modem for backup.
I agree. However, not this time. We all know what tends to happen to engineering projects that develop mission creep…
>>Every hacker has a friend with a soldering iron and a heat gun, right?
>Probably. Can’t think of any friends who’d take on making a hundred of these, though.
Bah! If you (or somebody else) is willing to come by and be entertaining for a while, I can crank these out. It’d even give me a reason to buy a new soldering station I’ve been looking at for a while.
100 units isn’t that big of a deal – maybe a solid afternoon.
>100 units isn’t that big of a deal – maybe a solid afternoon.
Much appreciated, Garrett. If I can’t find any other way than hand-building these, you’ll get a call.
> Not my constraint. Dave Täht chose the WND3700 router
Yeah, so he can run Cerowrt. Next question? Oh wait, right. It’s still a stupid idea.
Dave got his butt handed to him on the ntp list when he proposed this last year.
Reason: in steady state, ntp sends a smallish packet every 1024 seconds, and has a built-in mechanism to throw away packets with excessive delay. What are the chances he’s going to see an event?
If you want to work on bufferbloat, why not fix Linux, which will open up the receive window beyond the needs of the bandwidth delay product? And if the other side is also Linux, it will have taken advantage of this.
(Linux will allow the socket buffers/windows grow to 4MB by default. And doesn’t implement TCP Vegas.)
See:
Or just gaze at your navel whilst running experiment 1d:
>Reason: in steady state, ntp sends a smallish packet every 1024 seconds, and has a built-in mechanism to throw away packets with excessive delay. What are the chances he’s going to see an event?
And that’s exactly why Dave’s monitoring software will use rawstats – the unfiltered propagation data.
>If you want to work on bufferbloat, why not fix Linux
If you had been paying attention upthread, you would know that Dave already did this.
If you don’t have to run the WND3700 router, you could use a Raspberry Pi. Cases will be available by summer.
I wonder if these are the same problem that creator of ColorHug had…
@esr
> We all know what tends to happen to engineering projects that develop mission creep…
Perhaps it could have a feature that, if it kicks over to the cellular modem, it sends you an email?
:-)
@esr
> We all know what tends to happen to engineering projects that develop mission creep…
Shoot, I got that wrong. What I MEANT to say was: perhaps it could add a feature to allow it to be configured simply by sending it an email, adding a module to read a POP3 box.
But it isn’t funny if you screw it up. Ah well, back to the drawing board.
Here’s a USB mouse with its own vegetable patch:
@esr: Hey, what about the Gumstix GPSstix boards? They’re a bit expensive ($130 for the board + $169 for the Verdex Pro board you need to go with it), but after a quick glance around the related sites, they are based on the NEO 5Q GPS receiver which I think has the 1PPS line. The plus side is you can build the NTP server right into the device which runs an embedded Linux.
>[Gumstix] are based on the NEO 5Q GPS receiver which I think has the 1PPS line.
Proves nothing. All GPS chipsets have 1PPS on an output pin. The board-level integrator chooses whether to make that accessible. Googling on “Gumstix 1PPS” suggests they have not.
I agree Maupin’s extension of my idea would be a distraction full of mission creep. Probably the only creep that might be reasonable is bringing out any spare GPIO pins the CPU may have, alongside a replica of the PPS signal, into some sort of spare “geek port” on the box.
But my original idea: a box with just GPS, reprogrammable computer, two Ethernet ports and whatever else ESR and Dave need – is still reasonable. Those components alone are enough to make the thing useful to at least some people outside the project, a few of whom are currently paying an order of magnitude more for inferior solutions.
Also, have you tried negotiating with ZTI (or any other supplier of “good” USB GPS) for a better unit price considering you will commit to buying 100 of them?
>Also, have you tried negotiating with ZTI (or any other supplier of “good” USB GPS) for a better unit price considering you will commit to buying 100 of them?
One of our European devs is chasing a quantity-100 quote from ZTI.
> If you had been paying attention upthread, you would know that Dave already did this.
Dave did what, exactly?
> Proves nothing. All GPS chipsets have 1PPS on an output pin. The board-level integrator chooses whether to make that accessible. Googling on “Gumstix 1PPS” suggests they have not.
Looking at the top link generated by asking Google for “Gumstix 1PPS” shows:
“The 1pps from the ublox gps on the gpsstix board maps to gpio9.”
(and in-fact many links contain the same phrase)
>Dave did what, exactly?
Shepherded some Linux changes that will radically reduce TCP/IP packet latency.
>“The 1pps from the ublox gps on the gpsstix board maps to gpio9.”
Useless for anything but LEDs. In order for PPS to be visible for time service it needs to be carried to the serial-to-USB adaptor. This is what, in general, GPS vendors fail to do.
Since you’re not rational, be sure to point Dave to
or
or even
(The clock stretcher isn’t required with an M12M or M12+T as its PPS pulse width is 200ms.)
To be perfectly clear, I don’t think your approach of “cheap USB GPS receiver” doing 1PPS is going to be all that accurate. Read the links above for ‘why’.
As for your 60Hz hum nonsense:
> Useless for anything but LEDs. In order for PPS to be visible for time service it needs to be carried to the serial-to-USB adaptor.
LOL
You do know you can take an interrupt of the GPIO signals, right?
Seriously, Eric. What you *DON’T WANT* is the additional latency of the DCD (or DTR, or whatever) line going high, **AND** the latency of the USB protocol moving your 1PPS signal way over to the right.
>You do know you can take an interrupt of the GPIO signals, right?
If I’m willing to do custom hardware that wires the board to a gpio pin on the router, yes. For reasons I have patiently explained several times, this is not a feasible option.
> Shepherded some Linux changes that will radically reduce TCP/IP packet latency.
Not in the mainline kernel (so far, they’re cerowrt-only!), which was my request.
>Not in the mainline kernel (so far, they’re cerowrt-only!)
Wrong.
> You do know you can take an interrupt of the GPIO signals, right?
Further on this, ARM (and ARM linux) supports a ‘fast irq’, which means the kernel can get to it quite a bit faster, and thus keep a (vastly) superior reference clock updated and chiming in-sync.
> Wrong
Let’s quote the first line together, ready?
A note: “I” did NOT get the new stuff related to bufferbloat pushed up into the upcoming Linux 3.3 kernel.
(emphasis mine)
Go back and read his comment again. And the followup.
And consider yourself under a ban warning. Your trolling and your vicious, petty, carping attitude is wasting my time. You will be more polite in the future, or I will kick your ass off this blog.
> If I’m willing to do custom hardware that wires the board to a gpio pin on the router, yes. For reasons I have patiently explained several times, this is not a feasible option.
You’re either dense, or being silly.
When the GPSstix is attached to the proper Gumstix board, the 1PPS signal on the GPSstix is already connected to gpio9 on the Gumstix, through the 60-pin connector.
“No wires needed”, it’s snap-together, like LEGO. Cases are even available.
>When the GPSstix is attached to the proper Gumstix board, the 1PPS signal on the GPSstix is already connected to gpio9 on the Gumstix, through the 60-pin connector.
That’s useless unless we’re ready to abandon the WNDR3700 and port a routing distribution to the Gumstix, which is presently out of scope as a solution. If I can’t see 1PPS over USB, the WNDR3700 can’t use it.
>You’re either dense, or being silly.
That is the last remark of that kind you are allowed before being banned. If you had a history of constructive behavior I would allow you great freedom to insult me (though much less to insult any one else). You don’t, so you have to earn that liberty one polite and thoughtful comment at a time.
@Larry Yelnick:
This is a concern. Which is why I think, if USB is required, it would probably be much better to use a high speed device. Additional variable latency can be reduced to 125 us, maybe lower depending on USB scheduling. You could probably send 3 interrupt packets/microframe to the GPS (if that was your main focus) to get it down to 42 us.
Obviously, you don’t really care about fixed latency, as long as you have an idea of how long it is.
But the gumstix looks considerably more expensive than the garmin, which is itself quite expensive. Given that all the other USB mouse vendors seem to mimic the Garmin, it’s surprising that none of them have an analogue to the 18x OEM LVC.
Or maybe they do. I’m curious if the typical cable with attached mini-DIN has any wires that don’t actually connect to the mini-DIN, or that connect to one of the pins marked unused on some versions of the mini-DIN.
In any case, the 18X OEM LVC (perhaps unlike the 18 OEM LVC it replaced) apparently doesn’t actually terminate in bare wires. It has a connector. According to Garmin, you can cut off the connector without voiding the warranty, because a lot of people aren’t going to use the connector, but it is there, and they tell you the mating connector, and you can use it. The connector is not designed for extreme use, so you’d want to put it on a board inside an enclosure with a strain relief on the cable.
>Additional variable latency can be reduced to 125 us, maybe lower depending on USB scheduling.
Yes, this is one of the optimization possibilities I’ve been keeping in my pocket – it should be possible to jack the USB polling rate up to 8MHz.
>I’m curious if the typical cable with attached mini-DIN has any wires that don’t actually connect to the mini-DIN, or that connect to one of the pins marked unused on some versions of the mini-DIN.
The BR355 has this exact problem. There’s an unconnected yellow pin 3 in the pinout at and there’s some reason to believe that this brought out 1PPS on older versions of the hardware.
>In any case, the 18X OEM LVC (perhaps unlike the 18 OEM LVC it replaced) apparently doesn’t actually terminate in bare wires.
I wondered what the X suffix meant. I think you may have explained it.
I looked at the 18x technical spec at and it is very clear they bring PPS out on pin 1 (yellow). It also says “The factory-installed connector will mate with JST right-angle PCB-mount connector (model BM06B-SRSS-TBT) or side-entry PCB-mount connector (model SM06B-SRSS-TB).” And that connector looks real familiar.
So, yes, I think you’re right. Mating a male JST connector to a PL2303 or some other bog-standard serial-to-USB converter and protecting that with shrink-wrap might very well do the trick. This is probably the most feasible custom-build suggestion anyone has made yet.
@ESR – Talking about a 1 microsecond clock is nice and all. But how will you marry this accurate clock to the packets that you’re trying to timestamp? OK, so you build your GPS 1PPS interrupt circuit, but then exactly how are you going to get your OS to respond at 1 microsecond accuracy? There is so much “stuff” between the packet physically hitting the interface, the OS network stack, and the application that hooks up to your 1PPS clock that I think it extremely unlikely that your time stamps are going to be anywhere near 1 microsecond accuracy. I say this based upon my experience in developing low latency networks, which I have examined in lab conditions prior to deployment of products in the low latency financial trading world. </appeal to authority>
To come to grips with 1 microsecond network analysis you need to look at products from Corvil and TS Associates and others. Those chaps sell products that do the kind of analysis that the bufferbloat project seems to want to perform. To do the job at 1 microsecond, and down into nanosecond territory, you need PTP (which is far more accurate than NTP), and then you need highly optimised software that is dedicated to the analysis. Both Corvil and TS make appliances that analyse traffic using either span ports or, usually and more accurately, network taps. TS Associates even makes a PCI-X card that goes with their appliance. The card allows insertion of special “time stamp” calls into your production code on your production application servers, and thus allows analysis of both network component timing and application/OS software timing. This stuff is expensive and complex.
I think throwing a GPS 1PPS time stamp at your WND3700 is fine, except that you need to come to grips with all of the other “stuff” that’s going on inside the WND3700 before you could ever expect to get analysis working at 1 microsecond.
If the WND3700 is to do what bufferbloat seems to be asking, then it would have to be dedicated to the job of analyzing “passing” traffic and would not be doing any firewalling, NATing, AAA, connection-table populating, or pretty much anything else. If the WND3700 is required to support all of the standard WND3700 network capabilities and functions, then those processes are going to turn your 1 microsecond target into a bad joke. Unless I am somehow missing the point.
@esr:
I think you mean kHz…
BTW, the variable latency introduced by the USB is exacerbated and made hard to calibrate out because the 1-second PPS interval spans a whole number of USB frames. So the clock drift of the computer’s (router’s) internal USB frame counter vs. UTC time will cause the latency to gradually change over a long period of time, and then snap back.
The effect of this will be perceived as a one-frame (or perhaps one-microframe on high speed) slip every ‘n’ seconds, where ‘n’ depends on how far off the computer’s clock is. The closer the clock is to being correct, the less often the slip happens, and the worse it looks. For example, a clock that’s only off 1 ppm would gain or lose one second every 1M seconds, or gain or lose one frame every 1000 seconds, which might interfere with your analysis.
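The slip arithmetic is simple enough to capture in a tiny helper (the function name is hypothetical, purely for illustration):

```python
def slip_interval(clock_error_ppm, frame_period=1e-3):
    """Seconds between successive one-frame slips for a clock that is off
    by clock_error_ppm parts per million (default: 1 ms full-speed USB frame)."""
    drift_per_second = clock_error_ppm * 1e-6  # seconds gained/lost each second
    return frame_period / drift_per_second     # time to drift by one whole frame
```

So a 1 ppm error gives a slip roughly every 1000 seconds, matching the figure above; a 10 ppm error slips every 100 seconds.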
One way to compensate for this would be to have an external CPLD or FPGA running a local clock and interpolating time samples before feeding to the USB. So, for example, if you interpolated 30 samples between the PPS, the 1K frame rate or 8K microframe rate is not divisible by 31 (one per second plus 30 in between), so you could force jitter associated with the clock drift to happen much faster. Essentially, this approach would let you calibrate your internal system’s RTC much more quickly for the same level of accuracy.
You could easily play tricks to distinguish the 1 pps pulse from the others. For example, the 1pps pulse could be on DSR, and the others could be on DCD or CTS.
>For example, a clock that’s only off 1 ppm would gain or lose one second every 1M seconds, or gain or lose one frame every 1000 seconds, which might interfere with your analysis.
I don’t understand the failure mode you’re trying to describe. Let me lay out how I think the clock will be conditioned in each second, then you can explain how you think this drift will affect it.
1. PPS interrupt comes in. We record it as ppstime = clock_gettime(CLOCK_REALTIME). ppstime is actual top of second plus an unknown latency which does not exceed 50 microseconds.
2. UTC time arrives as data over USB; we record this as utctime. Time of arrival is recorded as timetime = clock_gettime(CLOCK_REALTIME)
3. timetime – ppstime is the imputed latency of the time report with respect to PPS top of second, and will always be less than a second. We call adjtime(3) to slew the clock towards utctime + (timetime – ppstime).
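A minimal Python sketch of the arithmetic in steps 1–3 (the function name is hypothetical; in a real implementation the inputs would come from clock_gettime() calls and the result would be handed to adjtime(3)):

```python
def imputed_offset(ppstime, utctime, timetime):
    """ppstime:  local CLOCK_REALTIME at the PPS interrupt
                 (true top of second plus a small latency)
    utctime:  UTC top-of-second reported in the sentence that
              later arrives over USB
    timetime: local CLOCK_REALTIME when that sentence arrived
    Returns the signed correction to slew toward (step 3)."""
    latency = timetime - ppstime       # sentence lag behind its PPS, < 1 s
    target = utctime + latency         # what the local clock should read now
    return target - timetime           # offset to hand to adjtime(3)
```

For a local clock running 50 microseconds fast, e.g. imputed_offset(100.00005, 100.0, 100.4), the result is about −50 microseconds.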
@patrioticduo:
Unless I misunderstood, the goal of using GPS was to keep the router’s RTC accurate within 1ms. I think this goal is eminently achievable.
@ Patrick Maupin – ah yes, you’re right. So I would suggest using RDTSC and rewriting PTP to dumb it down for WAN devices, to achieve somewhere between 1 ms and 100 us accuracy. But even at 1 ms accuracy, it can be very challenging to get any OS to receive frames and time stamp them accurately when there is so much stuff in between the PHY and the clock. Getting into the PHY device driver is a specialized area. Does the WND3700 publish the source for the NIC device driver? Are the NICs sharing an interrupt (bad) or using an interrupt per NIC (good)? Are the NICs running on special ASICs with the CPU sitting on top? If yes, then those ASICs probably have built-in queuing which is going to make it virtually impossible to time stamp traffic within the OS. These low-cost home routers often use a four-port switch ASIC with the OS sitting on top to handle all of the stuff from layer 4 to layer 7.
RDTSC is standard in all Intel CPUs from the Pentium up, and I have found it to be most useful for time stamping network traffic in my own endeavors at 100 Mbps rates. So for 1 ms accuracy RDTSC would be my choice, along with code similar to PTP but dumbed down to 1 ms. NTP just isn’t good enough anymore, but PTP is intended for LANs. So there is a sweet spot to be found for WAN synchronization that sits right somewhere in the 1 ms to 100 us range.
@patrioticduo:
Sure, but routers don’t use those. I don’t know about the uncertainty inherent in the router software and hardware, but I take it on faith that those who are using the equipment to their own satisfaction do.
@esr:
If you are certain that in your setup the variable latency is below 50 us, then the issue I described is probably not a problem.
What I was considering was a USB scenario with one interrupt per 1 ms frame. I guess you are scheduling more than one interrupt per frame if your latency is 50 us.
>If you are certain that in your setup the variable latency is below 50 us, then the issue I described is probably not a problem.
Dave Täht forwarded me measurements from a guy who has real-time-profiled PPS over USB under Windows. That’s what the measurements say.
I would think we would want to build using whatever serial -> USB converter part this guy used. I’m sure they aren’t all created equal…
>I would think we would want to build using whatever serial -> USB converter part this guy used. I’m sure they aren’t all created equal…
Good point.
That Sure electronics board mentioned by Beat Bolli definitely looks interesting. I wonder if you could convince them to update it? They only want $32 for the thing.
It uses the SiLabs USB -> serial converter, which may or may not be good enough from a latency standpoint.
Still, it has all the pieces your final solution would require except a case for the board, and comes in at an excellent price point.
Have you tried any GPS devices based on the u-blox chips? The modules themselves have USB out. Maybe it jitters less than the SiRF-based devices?
You can get a very simple test board with the u-blox device on it (full kit, connects via USB or RS232) from Mouser for sixty bucks.
Alternatively, it might be useful to contact them directly. They should know who you are — they describe using gpsd to connect to the device. They might be able to tell you about the latency in receiving messages over USB, and might be happy to send Dave a board to test with himself. Finally, I would expect their English to be pretty good — they’re in Switzerland…
Several other companies also seem to sell modules based on their chips (Antenova for one).
>You can get a very simple test board with the u-blox device on it (full kit, connects via USB or RS232) from Mouser for sixty bucks.
Link?
>Alternatively, it might be useful to contact them directly.
I’ll try it, but I need to know what board I’m talking about first.
The board I found at mouser is different (and cheaper — made by a third-party) than the one direct from u-blox:
That board’s manufacturer also makes a nice little board (but with PPS only going to an LED — blue wire required if we need it):
The chip/module manufacturer has a more expensive evaluation board:
If the USB out has sufficiently low jitter, you could probably use any mouse with this module. Otherwise, the module (or a small board like the gps-click for $50) might be the basis of a system we build with a lower latency USB interface (assuming we figure out the best USB chip for that).
Actually, I was wrong about needing a bluewire on the click — the schematic wasn’t drawn very well, but it appears that the 1pps also goes to the connector.
So, if the jitter on the u-blox part is low enough through USB, just find the right mouse using the right part and we’re done. If it’s not low enough, this looks like a good module to put down on a little board with a better USB to serial converter we can attach the PPS signal to.
I take off for the weekend and this happens…
I am encouraged by many of the comments on this thread, but I do feel the need to correct a few things that have gone by…
‘CBBD’ = Cosmic Background Bufferbloat Detector. I don’t know what CDDB is. What I started off calling the idea is a reference to this:
I’m certainly open to a better name…
I did not ‘get my butt handed to me on the ntp list’, in the end nobody was able to poke enough holes in the idea to dissuade me from trying it, and as eric noted, the intent was to analyze the noise that ntp’s various filters currently discards (e.g – the rawstat data), to see if there were any recognisable patterns across a large enough data sample.
I have had plenty of feedback from time geeks pointing to anomalies detected even within gigE switched networks.
That’s on the client side. On the server side, I’d hoped to also break down incoming ntp data when it was natted or not.
However, to get a good baseline and error bars (measuring the inherent drift of the router’s own clock, as another example), I really wanted reference sources on and beyond the edge of the network that had a chimer we could trust, which is how the concept of using GPS to cross-check the data came to be, and became the rathole of trying to find a GPS that actually does PPS in this so-called modern era.
Let’s see, other stuff – as esr also noted there are some really nice debloating enhancements for Ethernet in the upcoming Linux 3.3 kernel – which I helped shepherd by testing them in cerowrt. I gave full credit to the actual developers on the previous thread.
Despite my own desire to do this on the router testbed, having trustable edge time sources using any technology, with known performance characteristics and error bars, is an overall goal, so that other sorts of test data – not just ntp, but file transfers and the like, can be more directly compared. Imagine if you will, if you had 100 routers deployed on the edges of the internet, doing tests between each other….
So, perhaps long term something useful will come out of the rawstat data with or without a more accurate reference clock, I look forward to collecting more data, and if there was a way to get some solid chimers out ‘beyond the edge’ of the internet, it would be darn helpful.
Perhaps I’ll get to a ‘proof of concept’ or a ‘disproof of concept’ in the next month or two.
>‘CBBD’ = Cosmic Background Bufferbloat Detector. I don’t know what CDDB
My typo, probably.
“This ain’t no Mud Club. No CeeBeeGeeBee’s. I ain’t got time for that now!”
@esr and Dave Taht:
One thing that has been bugging me.
If the router has a reasonably stable hardware RTC, why do you need a non-jittered connection to the GPS?
Why can’t you assume that the temperature and voltage are relatively stable and the RTC won’t change too much over the course of a minute or an hour? Within some time period, you can determine a straight line correction based on two recent data points that are (a) far enough apart to minimize quantization error and (b) “appear” to be earlier (compared to the internal RTC) than any other nearby points.
So, my strategy would be:
1) Develop a straight line correction factor from the RTC to UTC for the internal RTC based on the perceived earliest two of recent multiple GPS measurements.
2) When an NTP packet comes in, don’t attempt to directly correlate it to the most recent GPS measurement. Instead, correlate it to the internal RTC, using the linear correction factor.
A more sophisticated version of this could use some filtering to allow for some RTC stability drift and to get a bit more accuracy, but the key is to either throw out all the GPS measurements that have been jittered way late, or to take the opposite approach, and just average all the samples (knowing that will give you more reported latency).
Now, having said all that, I think this technique would work better the more samples you had in a recent time period, which is why I was suggesting using hardware to report more than one PPS.
But to the extent that the data reported from the GPS consists of fixed + variable latency and to the extent that the internal RTC is stable, do you really need the PPS signal? And to the extent the PPS signal would be useful and you have to transport it over jittery USB, wouldn’t it likewise be useful to have a faster signal?
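A rough Python sketch of steps 1 and 2, under the assumption that USB jitter can only delay a reading, never advance it (names are hypothetical, and a real version would add the filtering mentioned above):

```python
def linear_correction(samples):
    """samples: chronological list of (rtc, utc) pairs from recent GPS fixes.
    Jitter only ever makes a fix appear *later* against the RTC, so the
    least-delayed sample in a window is the one with the smallest rtc - utc.
    Fit a line through the least-delayed point of an early window and of a
    late window, keeping the two anchor points far apart (condition (a))."""
    half = len(samples) // 2
    r1, u1 = min(samples[:half], key=lambda s: s[0] - s[1])
    r2, u2 = min(samples[half:], key=lambda s: s[0] - s[1])
    a = (u2 - u1) / (r2 - r1)        # UTC seconds per RTC second (drift rate)
    b = u1 - a * r1
    return lambda rtc: a * rtc + b   # step 2: map any RTC timestamp to UTC
```

An incoming NTP packet timestamped against the raw RTC can then be corrected through the returned function, rather than being matched to the nearest (possibly jittered) GPS pulse.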
Oh, and one last question:
Can you recommend this router? What firmware should I use? Anything to be wary of, like manufacturer ships two versions and this one doesn’t have enough flash? (I try to follow similar “ask slashdot” discussions, but am never happy with all the answers :-)
@tz:
Exactly. Being able to effectively measure the edge is something that hasn’t been done yet. The more nodes participating the better the edge can be mapped.
Various projects are trying – notably the samknows people and the bismark folk, ICSI, and CAIDA.
In my own case I’m rather interested in the ledbat (bittorrent) work, particularly inside a given provider’s network, across provider networks, and in the presence or absence of various queue management technologies.
There are/were a lot of people in the IETF that held high hopes for an effective E2E ‘scavenging’ protocol in ledbat. The irony is that ledbat (v9) at least, does scavenge in drop tail, non-aqm, non-shaped networks, but is only marginally effective in RED managed ones. The newer AQM technologies we’re fiddling with are unknowns in this regard, but seem very effective for managing other sorts of traffic, so seem to be a net win… but I’d really like to get a grip on what uTP and ledbat do in a debufferbloated universe sometime soon.
Anyway, to get back to the ntp + gpsd issue, I’ve read through the backlog of postings here, and it appears that nothing off the shelf is going to work, and that a basement build party might be required… and frankly I’d like to get to processing my backlog of data, and maybe getting others to start collecting it, to see if any trends can indeed be determined with or without a decent reference chimer.
Collecting rawstats is easy for a daily period. Merely add the following lines to your /etc/ntp.conf:
statsdir /tmp/ntpstats/ # or some other place you can save to
statistics loopstats peerstats clockstats rawstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable
filegen rawstats file rawstats type day enable
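Once rawstats are being collected, each record can be reduced to the standard NTP offset/delay pair. A hedged sketch follows; the field layout (MJD, second-of-day, source address, destination address, then the four exchange timestamps t1–t4) is assumed from ntpd’s documented record format and may differ between ntpd versions:

```python
def rawstats_offset_delay(line):
    """Parse one ntpd rawstats line into (offset, delay) in seconds.
    Assumed fields: MJD  sec-of-day  source  dest  t1 t2 t3 t4, where
    t1..t4 are the origin, receive, transmit and destination timestamps
    of one NTP exchange (check your ntpd version's documentation)."""
    fields = line.split()
    t1, t2, t3, t4 = map(float, fields[4:8])
    offset = ((t2 - t1) + (t3 - t4)) / 2.0  # standard NTP clock offset
    delay = (t4 - t1) - (t3 - t2)           # round-trip network delay
    return offset, delay
```

The offset and delay formulas themselves are the standard NTP on-wire calculations, so they hold regardless of which ntpd version produced the timestamps.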
@Patrick Maupin: I can recommend routers that use the chipset in the Netgear wndr3700v2 and wndr3800 – there are something like 36+ products from various vendors that use that chipset (Atheros ag71xx (wired chips) and ath9k (wireless)), all of which are supported by OpenWrt.
I can recommend the Nanostation M5 for long-distance links. I’ve heard some good things about the Buffalo BZR-300 series.
So far as I know this is the only chipset out there at present with 100% open-source drivers (although Broadcom has been getting better of late), and even the native firmware is generally pretty good. (The Buffalo router actually ships with dd-wrt, btw.)
That leaves 30+ routers based on this chipset that I have not had a chance to play with directly, and to keep my life simple, the 3800 is the way the cerowrt project is going to stay for a while longer.
As to your RTC question – there is no RTC, battery backed up or otherwise, in most consumer routers. This leads to lovely chicken-egg problems with ntp and dnssec, as one example. As to evaluating the stability of the onboard crystal and OS’s timekeeping ability, the latter is dependent on the OS version somewhat, and the workload somewhat, the former dependent on environmental conditions and build quality – and in either case, how do you effectively measure that stuff without a reference time source?
I’d LOVE to get down to where we merely had known error bars.
>‘CBBD’ = Cosmic Background Bufferbloat Detector.
You don’t want to confuse this with the Blufferboat – it sails into international waters with high-stakes poker players aboard.
….OUCH!…OK,OK! I’ll stop!…..
@Dave Taht:
Understood. Really just need a solid timing source. For example, the USB chip has to output at 1KHz frame rate — is that available internally? (And BTW, the USB chip damn well better be pretty close to correct frequency or it just plain Won’t Work. USB spec is +/- 500 ppm.) So one would assume, if frame info can be read from USB, it will be pretty accurate. This can probably even be done by seeing where its DMA pointers are if it doesn’t have a special register for that.
BTW, I think I found the absolute easiest build for an uncased product that will connect the Mediatek GPS chip (if that works OK) to the high-speed FTDI chip (if that works OK) and connect 1PPS to a modem control signal:
You’d need to solder a 1×6 0.1″ header strip to the Fastrax and then plug the right connectors from the cable on to it.
Total cost for a single would be $57 plus the header plus the plastic case to put the Fastrax in and a cable-tie to provide some strain relief so the cat doesn’t destroy the Fastrax when it knocks it off the windowsill.
>Really just need a solid timing source. For example, the USB chip has to output at 1KHz frame rate — is that available internally?
Wouldn’t solve the problem. We don’t need just a reliable ticktock, we need a common timebase for all the detector nodes. That’s the real utility of GPS here, not the absolute precision.
Wasn’t CDDB a service that you could use to try to figure out the author, cd name and track name of your music cds based on track number and length?
@esr:
But you’re claiming that the problem with the available GPS mouses is latency. There are two kinds of latency — fixed latency can be measured and easily compensated for (for a given chip). Variable latency can be compensated for with a reliable ticktock. In other words, I am not questioning the decision to use GPS, but I am challenging the conclusion that 1 PPS must be available for it to be useful for your scenario.
For example, in this comment you describe a method of, essentially, measuring the most recent GPS tick against the most recent NTP packet.
But if you have a stable local clock with high enough resolution, if that’s all you’re doing, you’re throwing away most of the data that’s available to you, and it’s no wonder that jitter on incoming GPS data gives you fits. I’m suggesting you use lots of samples of GPS data to give you an equation that will convert whatever the local stable ticktock says into UTC. Then when the NTP packet comes in, you do the math and know how far out it is.
It’s certainly possible I’m missing something and this won’t work. But I haven’t seen it discussed in depth, and, in fact, your example of use says that the history of GPS samples isn’t being used.
>I am challenging the conclusion that 1 PPS must be available for it to be useful for your scenario.
I see. I’ll look into this.
CD Database. It’s what allows iTunes to automatically recognize your CD when you rip it, filling in a title, cover art and a tracklist. :)
I hooked up and configured one of my Venus 628s to just generate GPRMC, synced to UTC, 115200 Baud – the same family as the Phidget and the $30-$40 off the shelf devices.
With this the DTR line flips 2-3.5 mS after the PPS as visible on my scope. I didn’t try root nice -20. C would also be faster. The start bit of the $ is 1 millisecond after the pulse edge. I was doing an update in the background so the system wasn’t idle.
#!/usr/bin/python
import sys
import os
import time
import fcntl
import termios
import struct
if len(sys.argv) > 1 :
try:
ser = open( sys.argv[1], “rw”)
except:
print “Usage: dtrpps))
tabs to 4 spaces:
#!/usr/bin/python
import sys
import os
import time
import fcntl
import termios
import struct
if len(sys.argv) > 1 :
try:
ser = open( sys.argv[1], “rw”)
except:
print “Usage: devgpsrc.py [/dev/if/not/stdin]”
exit(1)
nodeid = sys.argv[1][-4]
ms = str(int(time.time()*1000%100000))
else:
ser = sys.stdin
nodeid = str(os.getpid())
ms = str(int(time.time()*1000%100000))
print ser, ser.fileno(), termios.TIOCMGET
print dat
ms = str(int(time.time()*1000%100000))
ios = ios ^ termios.TIOCM_DTR
fcntl.ioctl(ser.fileno(), termios.TIOCMSET, struct.pack(“i”,ios))
sigh. Try underscores for spaces
#!/usr/bin/python
import sys
import os
import time
import fcntl
import termios
import struct
if len(sys.argv) > 1 :
____try:
________ser = open( sys.argv[1], “rw”)
____except:
________print “Usage: dtrtest))
@esr: A while back, I ran into the same problem as tz did when I tried to send some primitive ASCII graphics. Doesn’t WordPress have some setting that will preserve spaces? How many bytes does the damn thing think it is saving on the hard drive, anyway?
>Doesn’t WordPress have some setting that will preserve spaces?
I did this with <pre lang=”C”>
It works with most other major programming languages as well, preserving spaces and doing language-apprpriate syntax highlighting.
GPSD alone with NO MODIFICATIONS (at least under Fedora 16 with all the updates) will sync using just the NMEA stream from a SkyTraq 624 or 628 in PPS sync mode (I’ve verified both can be put into sync to UTC mode) running with the full set of sentences at 115.2 kBaud consistently 14 milliseconds late. That is my DTR switcher will show the PPS $ at XX.9836 seconds, +/- one millisecond itself 3 milliseconds late per my oscilloscope.
I found it due to an annoyance where the generic FTDI USB ID is used for one of the GPS units in udev – so I didn’t intend to start it, but gpsd started, grabbed the serial port and the clock synced. Strangely, ntp (via ntpd or ntpdate) sets this so the PPS occurs at xx.540. I fixed the udev, but then ran it manually, and it synced right to the same microsecond offset again.
Maybe this could be of interest to you:
The EM-406A GPS Module is used as a time base in this project. It has a 1PPS output (Pin 6) and is available for 40$ to 60$.
Datasheet:
The aim of the project is to build a distributed satellite ground station network. Projects members have given a talk at 28C3:
* Presentation:
* Video:
And they made a call to arms :-)
>the EM-406A GPS Module is used as a time base in this project.
Clever idea but it looks like it needs the West German time radio service to work :-)
If ever you really thought you knew what time it was, let me refer you to this posting of hal murray’s
I have some additional graphs from hal showing time as reported from four different breeds of gps varying by 100s-1000s of ms over roughly – but not quite – a 24 hour pattern.
@Dave Taht:
Well, that’s just… silly.
With a TCXO, you should be able to do a good job of figuring out which of those are wobbling and which are steady. Then it’s a matter of trying to figure out which of the steady ones are accurate.
Even though the Motorola OnCore might not be the right thing for the build of a hundred units, they might be the very thing for setting up a test lab. Anything that is used in cell stations in real life with no complaints is probably pretty good, because the cells absolutely depend on having the right time to keep from interfering with each other.
Adafruit has introduced a new GPS module / breakout board:
GPS chip spec sheet:
Price: $39 ()
The adafruit one looks sweet. Intelligently designed. Obviously not their first effort.
But maybe not for timing. Where’s the PPS pin?
It says that it has 10Hz updates. Maybe it does a better job of not wobbling the RS232 data.
At least for initial development, if we find a cheap enough device, maybe we just add a wire. That will tell us whether the latency through the USB is good enough.
This dongle is 23 bucks:
Here’s a guy who used it with gpsd (not for timing, though):
Here’s a guy who took it apart because it was the cheapest way to get a GPS module:
Looking at his pictures, it doesn’t look like it would be hard to add a wire. The name of the GPS module is fairly clearly visible as a GlobalSat EB-3531. A quick check shows that that module does in fact support PPS and that the module vendor actually appears to have customer support:
Here’s the module datasheet:
THe module shows as discontinued in their module comparison, which may be why devices using it are so cheap:
If you want, I’d certainly be willing to buy a couple and modify them for testing.
USGLobalSat also makes their own USB dongle which looks to be pretty decent build quality for not much more. Should probably be easy to add a wire to that, too.
>If you want, I’d certainly be willing to buy a couple and modify them for testing.
You’re the hardware designer. My job is the to keep the requirements in view of both of us and do the software integration. So, you get to choose the hardware path. But I don’t see that this gets us much of anywhere, since we can’t guaranteed that our production version will use the same USB adapter with the same latency.
We have now passed the point at which I can track all the branches in the decision tree through blog comments. Please get set up on gitorious so we can whiteboard some design possibilities on the thumbgps project wiki. I’ll bring in Hal Murray and some other interested parties.
The decision tree passed comprehensibility for me as well about 100 messages back.
I like ‘thumbgps’ as a name, too.
If anyone wants a specialized mailing list for this, I can set one up fast.
I HAD started a repo for the cbbd stuff a while back, and got stuck on representing ntp’s time format before I stopped. While software does intersect with hardware, and I’m rather interested in how the firmware is programmed on these puppies, I somehow doubt that that firmware is readily available?
>The decision tree passed comprehensibility for me as well about 100 messages back.
Please register on gitorious. That will allow you to erite on the thumbgps project wiki, and I’ll give you commit privs on the main repo.
Mailing list – That’s probably a good idea. You, I, Patrick Maupin and Hal Murray should be on it to start with.
thumbgps-devel created. I am generally reluctant to arbitrarily sign people up for a given mailing list without their express consent, so those interested in pursuing this idea in a format easier to deal with (e.g: email) please sign up at:
(apologies for the currently obsolete ssl key)
A full uart emulation chip for usb is just slightly
more expensive than a 4 wire one.
Which I think sort of begins to explain why the PPS devices have been vanishing from the planet.
It does strike me as kind of odd to go on emulating an obsolete standard (rs-232) for so long. USB2 has several modes of ‘native’ operation that seem more suitable… interrupts, and isochronous mode. ‘Interrupts’ are interesting in that they aren’t actually interrupts, but are polled for.
isochronous mode supports data streams of up to 480Mbit, and individual ‘packets’ of up to 1024 bytes.
Heh. It would be nice to get a real usb id for gps…
I replied to esr and Dave on new mailing list.
You can do it (assuming you compile GPSD with PPS on CTS – this should be a command line or config file option) with the following three main parts: (Magnetic Mount active antenna with long cable) (SkyTraq breakout) (FTDI 3.3v Basic)
$13, 50, and 15 respectively for $78, Quantity 1. 100+ drops 20%. You would need a simple single-sided board with some headers to attach – all at 0.1″ centering, very easy to do, e.g. and – the PPS goes to CTS, the FTDI provides 3.3v @ 50mA – just enough for the GPS, and then just do TX – RXI and RX to TXO. Then some kind of packaging, the big problem is the hole for the mini-USB. You might have to configure the SkyTraq the first time, and AGPS helps (I have code for that – ftp download, then serial packetized upload with acks).
Even if you get a $30 GPS unit you would have to break it open and do some soldering under a stereo microscope, assuming you can attach the PPS to DCD or CTS or another pin easily.
For a highly accurate and inexpensive GPS time server solution, consider the Garmin 18X LVC OEM device which provides RS-232 NMEA and Proprietary messages plus a PPS output for 60 to 70 dollars US. The device comes with pigtail wiring that has Power+, Power-, Signal Gnd, RS-232 Rx, RS-232 Tx, and PPS. The device readily integrates with Linux 2.6+ kernels and I have set up many time servers using linux utilities gpsd, ntpd, and ptpd on commodity hardware machines. ntpq reports that the clock jitter is well under 20 microseconds and is usually within +/- 3 or 4 microseconds. A decent time server can be constructed for less than $200 US.
>For a highly accurate and inexpensive GPS time server solution, consider the Garmin 18X LVC OEM device
A couple of my senior devs use this GPS for time service; we know it well. It is indeed a good PPS source, but it has three serious drawbacks for the deployment I have in mind.
1. Low sensitivity – very poor indoor performance. The 18 has the design heritage of Garmin’s marine-GPS line; it’s good enough on the water, with an unobstructed skyview, but performance degrades rapidly in weak-signal environments.
2. Won’t talk to the USB port on a modern commodity router. Not even a conventional serial-to-USB adapter would do the trick; we’d need one of those, plus a hand-bodged cable adapter for Garmin’s OEM connector.
3. High cost per unit. What we’re going to deploy, the Navisys Macx-1, is less than half as expensive as the bare Garmin 18, let alone what the custom adapter hardware would cost.
Ooh, here’s another very interesting use this could enable!:
Google is apparently using a combination of GPS high-resolution timing and locally installed atomic clocks (!) as part of a novel approach to a distributed, consistent database called “Spanner”.
I designed a board that has the Trimble Copernicus II on it. Used the dual-uart from ftdi to connect the two serial outputs from the GPS chip (one for raw trimble, the other for NMEA) — usb. Has an RF in and RF out (the second is for a follow-on GPS device in our system, and has a dc-block on it to allow the second device to receive the RF signal for it’s own use). I use my board on an embedded linux board as the system ntp server. I have gpsd and chrony working together. Cost of the boards, with parts, is about $75. And that’s just for 2 boards. Works like a charm.
A little late and still somewhat hand built, but see links on for complete Raspbery pi time server:
off the shelf parts: Adafruit GPS board $40, Raspberry Pi $35, 4GB SD card with software,
custom adapter board (I had 10 made for $60), micro usb power supply, case.
Ethernet built in, whole thing size of a pack of cigarettes. Do have to solder at least 10 pins, but no wires at all.
Darrel AK6I
This electronic parts search engine might help someone find something useful.
Component Search
It is a startup company based in Orange County.
Good luck. | http://0-esr.ibiblio.org.librus.hccs.edu/?p=4171 | CC-MAIN-2017-47 | refinedweb | 16,900 | 70.23 |
10 reasons to use Groovy in 2019
José Coelho
Updated on
・5 min read
Why Groovy?
It's been about a year since I joined my company's DevOps team and one of the main tools we use is Jenkins which gets along great with Groovy. I mostly use Groovy for orchestrating pipelines and automating some boring tasks.
Groovy is a powerful language for the Java platform, it integrates smoothly with any Java program. It's also a great scripting language with its powerful and easy to learn syntax.
[Groovy is] aimed at improving developer productivity thanks to a concise, familiar and easy to learn
syntax. It integrates smoothly with any Java program, and immediately delivers to your application
powerful features, including scripting capabilities, Domain-Specific Language authoring,
runtime and compile-time meta-programming and functional programming.
So here are 10 features that I've learned in the past year that made me love Groovy:
1. Simplicity
Coming from a Java back-end development team, learning Groovy was a breeze for me. It is build on top of Java standard libraries, providing extra features. Most of them make programming much simpler.
1.1 Declaring Lists/Maps
Groovy is an optionally typed language, you can use the
def keyword to declare variables. For example declaring lists or maps is as simple as:
def myList = [] def myMap = [:]
1.2 Iterating over Lists/Maps
And iterating over them is incredibly easy and readable using Closures:
myList.each {element -> println element } myMap.each {entry -> println "User: $entry.user | Email: $entry.email" }
2. String Interpolation
[String] Interpolation is the act of replacing a placeholder in the string with its value upon evaluation of the string.
In Groovy, placeholders are surrounded by
${}or prefixed with
$ for dotted expressions.
In the previous snippet we can see an example of string interpolation. But here is another one:
try{ throw new Exception() }catch(Exception e){ println "Error during operation. Cause: ${e}" }
3. Casting
Groovy makes casting very easy and readable with the
as operator. To use this operand the casted class must implement the
asType()method. This already happens for standard classes like lists, enumerators, etc.
For example:
enum Employee { MIKE, HANNA } String name = "JOHN" try{ name as Employee }catch(Exception e){ println "Could not find employee ${name}. Cause: ${e}" }
4. Json to Classes
I work a lot with Web Services with Json responses so inevitably I have had to map responses to Groovy classes. This comes out of the box with Groovy and it's extremely easy, just pass a Json through the class constructor.
String response = '{name:"John", position: "Developer", age: 32}' // Json response to map def employeeMap = new JsonSlurper().parseText(response) // Consider an Employee class with the attributes name, position and age Employee employee = new Employee(employeeMap)
That's it. We just built an employee object from a Json string.
The other way around is just as simple:
def json = JsonOutput.toJson(employee) // json == '{name:"John", position: "Developer", age: 32}'
5. Assertions
Groovy has the same
assert statement as Java, but way more powerful - hence it's name - Power Assertions.
The difference being its output in case the assertions resolves to
false. For example:
def contacts = ['John', 'Anna'] assert contacts.isEmpty() //Output: //ERROR org.codehaus.groovy.runtime.powerassert.PowerAssetionError: //assert contacts.isEmpty() // | | // | false // [John, Anna]
This allows you to very easily understand what has made the assertion fail.
6. Defining variables
Groovy is optionally type, this means that you can define a variable with its type or simply use the keyword
def. This applies as well when declaring List or Maps, their types are optional. For example:
String name int age def address List friends = ['John', 'Anna'] Map family = ['Mother':'Mary', 'Father':'Joseph'] def getFamilyMember("Mother"){ ... }
For those of you who know Javascript, this is similar to the keyword
var.
This gives you incredible flexibility, however be cautious when using it. It might make it harder on your team or someone else using your code to read it and understand what is expected as input or output.
7. Hashing
If you've ever used Java, you probably know how verbose it is to hash a string - unless you're using a third-party library.
Groovy 2.5 brings us some useful methods to the
String class. Calculating hashes is as simple as calling a method on a String. Groovy makes it simple:
def password = "thisIsMyPassword" def md5 = password.md5() def sha256 = password.sha256() //For other algorithms use digest() method def sha1 = password.digest('SHA-1') ...
8. Operators
Groovy supports the most common operators found in other languages. But that's not enough the are some more interesting operators Groovy provides. Here are a few:
Elvis operator
This is a shorter version of the ternary operator. This is very useful, for example, when the condition could evaluate to null.
// Instead of this def user = person.name ? person.name : 'Guest' // Use this def user = person.name ?: 'Guest'
Safe navigation operator
Another operator that can be used to check if a variable is null is the Safe Navigation Operator.
def user = person?.name // user == null
Use this operator when you want to avoid
NullPointerExceptions. In case the object you're accessing is null, this operator will return a
null instead of throwing a
NullPointerException.
Spread Operator
The spread operator (
.*) is used to execute an action on all items of a Collection, instead of using a loop or a closure as we've seen before. For example:
def numbers = [1,2,3,4,5,6] *numbers // == [1,2,3,4,5,6] def people = [ new Person(name: 'John', age: '25'), new Person(name: 'Anna', age: '21') ] def friends = people*.name // friends = ['John', 'Anna']
9. Traits
Traits are a structural construct of the language which allows:
-
composition of behaviors
-
runtime implementation of interfaces
-
behavior overriding
-
compatibility with static type checking/compilation
I like to think of traits as interfaces where you can actually implements methods. It's very useful when you have a very complex and structured application and you want to keep things clean.
It's definetly something I've missed in the early Java.
Here's an example:
trait Sociable { void greet() { println "Hello!" } } class Person implements Sociable {} Person p = new Person() p.greet() // Hello!
10. Regular Expressions
Groovy natively supports regular expressions and it's quite simple. It has 3 operators for regular expressions:
~this is the pattern operator, its the simple way to create an instance of
java.util.regex.Pattern:
def pattern = ~"^abc{2}\d" // pattern instanceof Pattern == true
=~this is the find operator which will look for a pattern in a string and returns a
Matcher:
def pattern = ~"abc{2}\d" def found = "abcc5" =~ pattern // found instanceof Matcher == true
- and finally the match operator
==~which returns
trueif the string is a strict match of the regex:
def found = "abcc5" ==~ pattern // found == true
Conclusion
Groovy feels like a breath of fresh air if you've been programming in Java, or other OOP languages, for years.
It makes things so much easier and simple and way less verbose. Plus the extra features like scripting and Domain Specific Language capabilities, push Groovy to a new level and gives it this new fresh look that's missing for older languages.
I hope you found this article interesting and let me know in the comments below if you have any experience with Groovy and how do you use it. :)
Programming is Hard
Indeed it is. Hollywood films often portray programmers as fast-typing computer wizards who can "hack" into anything. We all know that this can never be farther from the truth. Here is an article that discusses what goes into the many aspects of "real" programming.
I really like it, it's always my first choose, it does your life easier but in high throughput and performance API could be slow. I migrated some API on Grails to Java and Go and the results were that the new APIs perform 3x better than Grails API.
Also it has a lot of magic that could give you some troubles if you need very low level specific things.
So, if you need lunch quickly or you are under 5k ~ 10K RPM or you don't need a high performance API or you can use a lenguage made to do easier your life you can go with Grails.
PS: I know 300K RPM Grails API that works great, but it has a lot of SRE on it.
This is why Micronaut comes in hand where you can still use Groovy and GORM but get higher speeds. At least that's what I understood from reading about Micronaut, never actually tested it myself.
What is your view on Kotlin compared to Groovy?
(Background: In my team we have an active debate going on for the last months about switching to Kotlin. We have been using Groovy for some years but feel that the language is slowly 'dying' and getting less interest in favour of Kotlin.)
Hey Hidde,
I think they're both very interesting languages, and of course, once a new language comes along its "competitors" start losing some track in the community.
In the end I believe they will coexist, but be narrowed down to their specific purposes/applications. They both have their own strengths.
What are you guys using Groovy for? In your place I would start new projects in Kotlin, just to see how it goes, but not migrate old ones.
Yes, we have no trouble using Groovy for multiple projects. It is a great language.
Whad I personally miss the most in Groovy (versus Kotlin) is the type safety of all kinds of closures. And nullability.
For the more scripting side of things, Groovy might remain a better option than Kotlin in some cases.
Groovy has been evolving for years and Groovy 3.0.0 will be released with lots of new features this year, so Groovy is not 'dying' and will not be die.
Thank you for the great work you're doing to groovy 3.0, I still believe that groovy should become one day fully static compile in order to live longer, maybe it's in the roadmap of 4.0, and it's ok to deprecate things that prevent that from happening.
Which company do you work?
Thanks a lot for share with us
Great article José Coelho.
Good tips for whom's starting with Groovy.
Other interesting article would be Groovy vs Java performance comparison.
I started using groovy in 2011, all these features were there, so why we should use it in 2019, does it have anything new? I would rather use kotlin now | https://dev.to/jcoelho/10-reasons-to-use-groovy-in-2019-431f | CC-MAIN-2019-43 | refinedweb | 1,771 | 64.61 |
Hashtable and Dictionary both of them are used to maintain the key, value pair only there is some basic difference that makes us to opt, any of this 2 based on the situations.
Differences between Hashtable and Dictionary.
1. Declaration of Hashtable and dictionary;
Dictionary:
Dictionary<int, string> dict = new Dictionary<int, string>();
Hashtable :
Hashtable hst = new Hashtable();
2. Namespace for Hashtable and dictionary;
Dictionary : It is generic type of collection. Its belongs to the
using System.Collections.Generic;
Collection family.
Hashtable : it is non generic type of collection so it belongs to:
using System.Collections; collection family.
3. Boxing and Unboxing
Dictionary : At the time of declaration itself we defined the type
Dictionary<int, string>
That’s why no need of any type casting; at time of retrieval. It will take only the same data type at time of assignment.
dict.Add(1, "1");
Hashtable : It can store any type of datatype as there is no declaring.
Hashtable hst = new Hashtable();
You can assign any type of data in hash table like below;
hst.Add("1", "1");
hst.Add(1, "1");
At the time getting the value you need to do the unboxing as
int num = (int)oh;
string str = (string)hst[oh];
4. Performance
Dictionary is faster than hash table as there is no need to do the boxing and unBoxing.
Please see the below code snippet and result.
static void Main(string[] args)
{
Console.WriteLine(" 1) Start : - " + DateTime.Now);
Dictionary<int, string> dict = new Dictionary<int, string>();
dict.Add(1, "1");
for (int i = 0; i <= 1000000; i++)
{
dict.Add(i, "value " + i);
}
foreach (var o in dict)
{
int num = o.Key;
string str = o.Value;
}
Console.WriteLine("End : - " + DateTime.Now);
Console.WriteLine(" 2) Start : - " + DateTime.Now);
Hashtable hst = new Hashtable();
hst.Add("1", "1");
hst.Add(1, "1");
for (int j = 0; j <= 1000000; j++)
{
hst.Add(j, "value " + j);
}
foreach (var oh in hst.Keys )
{
int num = (int)oh;
string str = (string)hst[oh];
}
Console.WriteLine("End : - " + DateTime.Now);
}
Thread safety point is missing | http://www.getmscode.com/2015/03/c-hashtable-vs-dictionary.html | CC-MAIN-2017-51 | refinedweb | 339 | 69.68 |
This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
I really think that the best way to do this is for the compiler to produce .o files that somehow contain the gcc library requirements.
Then any gcc driver used to link the files together should collect these libraries and add them to the linker command line.
...
Remember that this is only for gcc's internal libraries,
Actually, I think it should be supported for user libraries too. Remember that bad old days when you had to specify -lm if you used a math function? Well, we still have the same problem, but worse. For C, C++, and Java libraries (which are already in a global namespace), prorammers are requires to know the magic -l incantation to add. This is ridiculous.
For Java, if class A references class B, the compiler needs to be able to find class B when compling class A. So the compiler needs to remember the library where it found B, and include that information in A.o.
For C and C++, a header file foo.h could include a __Pragma that would enable the linker to find the library containing foo.o.
I'm talking about defaults - if should be possible to specify to linker that it should use some other library that it was compiled against - but most users should not need to add -l or -L flags. -- --Per Bothner per@bothner.com | http://gcc.gnu.org/ml/gcc/2004-06/msg00116.html | crawl-002 | refinedweb | 246 | 73.37 |
Last year flussence++ wrote a nice post about writing XMMS bindings for Perl 6 using the Native Call Interface. It has improved a bit since then (at least NCI has; I don't know about XMMS), so let's show it off a bit.
To run the examples below you need the NativeCall module installed. Then add `use NativeCall;` at the top of the file.
Previously, we were carefully writing all the C subs we needed to use and then usually writing some Perl 6 class which wrapped them in a nice, familiar interface. That doesn't change much, except that now a class is no longer just an interface to some C-level data structure. Thanks to the new metamodel we can make our class actually be a C-level data structure, at least under the hood. Consider a class representing a connection to the Music Player Daemon:
    class Connection is repr('CPointer') {
        sub mpd_connection_new(Str $host, Int $port)
            returns Connection
            is native('libmpdclient.so') {}

        sub mpd_connection_free(Connection)
            is native('libmpdclient.so') {}

        method new(Str $host, Int $port) {
            self.bless(mpd_connection_new($host, $port))
        }

        method DESTROY {
            mpd_connection_free(self)
        }
    }
The first line may not look familiar. The `is repr` trait tells the compiler that the internal representation of the class `Connection` is a C pointer. It is still a fully functional Perl 6 type, which we can use in method signatures or wherever else (as seen in the lines below).
We then declare some native functions we're going to use. It's quite convenient to put them inside the class body, so they don't pollute the namespace and don't confuse the user. What we are really exposing here is the `new` method, which uses `bless` to set the object's internal representation to whatever `mpd_connection_new` has returned. From now on our object is a Perl 6-level object, while under the hood being a mere C pointer. In the `DESTROY` method we just pass `self` to another native function, `mpd_connection_free`, without any need to unbox it or whatever. The `NativeCall` module will just extract its internal representation and pass it around. Ain't that neat?
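For completeness, here's how using the class might look — a hypothetical sketch not shown in the original, assuming an MPD server listening on the default port 6600 and libmpdclient.so available in your library path:

```perl6
# Hypothetical usage of the Connection class above. Assumes an MPD
# server is running on localhost:6600 and libmpdclient.so is installed.
my $conn = Connection.new('localhost', 6600);
say 'Connected!' if $conn.defined;
# No manual cleanup needed here: DESTROY passes the pointer to
# mpd_connection_free when the object is garbage-collected.
```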
Let's see a bigger example. We'll use the taglib library to extract the metadata of some music files lying around. Let's look at the `Tag` class first:
    class Tag is repr('CPointer') {
        sub taglib_tag_title(Tag)  returns Str is native('libtag_c.so') {}
        sub taglib_tag_artist(Tag) returns Str is native('libtag_c.so') {}
        sub taglib_tag_album(Tag)  returns Str is native('libtag_c.so') {}
        sub taglib_tag_genre(Tag)  returns Str is native('libtag_c.so') {}
        sub taglib_tag_year(Tag)   returns Int is native('libtag_c.so') {}
        sub taglib_tag_track(Tag)  returns Int is native('libtag_c.so') {}
        sub taglib_tag_free_strings(Tag)       is native('libtag_c.so') {}

        method title  { taglib_tag_title(self)  }
        method artist { taglib_tag_artist(self) }
        method album  { taglib_tag_album(self)  }
        method genre  { taglib_tag_genre(self)  }
        method year   { taglib_tag_year(self)   }
        method track  { taglib_tag_track(self)  }
        method free   { taglib_tag_free_strings(self) }
    }
That one is pretty boring: plenty of native functions, and plenty of methods doing exactly the same things. You may have noticed the lack of new: how are we going to get an object and read our precious tags? In taglib, the actual Tag object is obtained from a TagFile object first. Why didn’t we implement that one first? Well, it’s going to have a method returning the Tag object shown above, so it was convenient to declare Tag first.
    class TagFile is repr('CPointer') {
        sub taglib_file_new(Str) returns TagFile      is native('libtag_c.so') {}
        sub taglib_file_free(TagFile)                 is native('libtag_c.so') {}
        sub taglib_file_tag(TagFile) returns Tag      is native('libtag_c.so') {}
        sub taglib_file_is_valid(TagFile) returns Int is native('libtag_c.so') {}

        method new(Str $filename) {
            unless $filename.IO.e {
                die "File '$filename' not found"
            }
            my $self = self.bless(taglib_file_new($filename));
            unless taglib_file_is_valid($self) {
                taglib_file_free($self);
                die "'$filename' is invalid"
            }
            return $self;
        }

        method tag  { taglib_file_tag(self) }
        method free { taglib_file_free(self) }
    }
Note how we use native functions in new to check for exceptional situations and react in an appropriately Perl 6 way. Now we only have to write a simple MAIN before we can test it on our favourite music files.
    sub MAIN($filename) {
        my $file = TagFile.new($filename);
        my $tag  = $file.tag;

        say 'Artist: ', $tag.artist;
        say 'Title:  ', $tag.title;
        say 'Album:  ', $tag.album;
        say 'Year:   ', $tag.year;

        $tag.free;
        $file.free;
    }
Live demo! Everyone loves live demos.
    $ perl6 taginfo.pl some-track.mp3
    Artist: Diablo Swing Orchestra
    Title:  Balrog Boogie
    Album:  The Butcher's Ballroom
    Year:   2009
Works like a charm. I promise I’ll wrap it up in a nice Audio::Tag module and release it on Github shortly.
Of course there’s more to do with NativeCall than just passing raw pointers around. You could, for example, declare a class as repr('CStruct') and access the struct fields directly, as you would in good, old C. This is only partly implemented for now though, but that shouldn’t stop you from experimenting and seeing what you can do before Christmas. Happy hacking!
Trying to run the first code snippet using a current rakudo from git, I get:
===SORRY!===
No applicable candidates found to dispatch to for ‘trait_mod:<is>’. Available candidates are:
:(Attribute $attr, Any $rw)
:(Attribute $attr, Any $readonly)
:(Attribute $attr, Any $box_target)
:(Routine $r, Any $rw)
:(Routine $r, Any $default)
:(Routine $r, Any $info, Any $inlinable)
:(Parameter $param, Any $readonly)
:(Parameter $param, Any $rw)
:(Parameter $param, Any $copy)
:(Parameter $param, Any $required)
:(Routine $r, Any $export)
:(Routine $r, Any $hidden_from_backtrace)
:(Mu $type, Any $rw)
:(Mu $type, Any $size, Any $nativesize)
:(Mu $type, Any $export)
:(Mu $docee, Any $doc, Any $docs)
:(Mu $docee, Any $doc, Any $docs)
:(Mu $child, Mu $parent)
December 24, 2011 at 10:20 am
Yeah, I forgot to mention you need to import the NativeCall module to do that (it’ll probably live in core Rakudo one day). I’ll update the post to mention that, thanks!
ConstraintLayout Tutorial for Android: Complex Layouts
In this ConstraintLayout tutorial, you’ll learn how to dynamically position UI elements in relation to other elements on the screen and to animate your views.
Version
- Kotlin 1.3, Android 8.1, Android Studio 3
ConstraintLayout is a layout on Android that gives you adaptable and flexible ways to create views for your apps.
ConstraintLayout, which is now the default layout in Android Studio, gives you many ways to place objects. You can constrain them to their container, to each other or to guidelines. This allows you to create large, complex, dynamic and responsive views in a flat hierarchy. It even supports animations!
In this tutorial, you’ll learn to use a multitude of
ConstraintLayout‘s features by building an app for a space travel agency. In the process, you’ll learn how to:
- Convert from other types of layouts to
ConstraintLayout.
- Dynamically position UI elements onscreen in relation to other elements.
- Animate your views.
Note: This tutorial assumes you are familiar with the basics of Android, Kotlin and
ConstraintLayout. If you’re new to Android, check out our Beginning Android tutorial. If you know Android but are unfamiliar with Kotlin, take a look at Kotlin For Android: An Introduction. To catch up on
ConstraintLayout, check out ConstraintLayout Tutorial for Android: Getting Started
Raze Galactic — An Intergalactic Travel Service
During this tutorial, you’ll build an interface for an intergalactic travel app which lets users book trips between planets, plan weekend space station getaways and make moon rover reservations to get around when they reach their destination.
Getting Started
Use the Download Materials button at the top or bottom of this tutorial to download the starter project.
Open the starter project in Android Studio. Build and run the app.
There are many elements in this app. You’ll learn how to display them properly using a complex
ConstraintLayout in this tutorial.
To start, go to Android Studio and open the layout file for this app, activity_main.xml, in Design view. Notice the structure of the layout is a series of nested
LinearLayouts and
RelativeLayouts.
ConstraintLayout is not the best choice for simple layouts, but it’s great for complex layouts like the one in this tutorial.
Converting a Layout to ConstraintLayout
In the Component Tree in Design view, right-click on the top-level
LinearLayout and select Convert LinearLayout to ConstraintLayout from the context menu:
Next, you should get a pop-up dialog with some options:
Accept the defaults after reading what they do and click on OK to dismiss the dialog and convert the layout. Android Studio will then attempt to remove all the nested layouts and convert your layout to
ConstraintLayout.
At this point, you may need to give Android Studio a moment to do some processing as it tries to figure out the new layout. After a moment, your layout may look like this:
After another moment, all your views may just jump into the upper left corner of the layout. If this happens, don’t panic!
Note: Make sure to turn off Autoconnect for this tutorial. Find this option in the toolbar of the design editor when you have
ConstraintLayout selected.
Removing Inferred Constraints
During the conversion process, Android Studio performs a number of steps. The last one may have been Infer Constraints, whose results might not quite be what you wanted. ;] If that’s the case, simply go to the Edit menu and choose Undo Infer Constraints:
Alternatively, you can simply press ⌘-Z on Mac or Control-Z on Windows.
In the Design view of Android Studio, if you are using a different device, the views may look different than they do in the screenshots. You can change the device setting in the toolbar.
Don’t spend a lot of time trying to get the layout exactly like it was before. At this point, you just want a very rough estimation to get yourself visually oriented. You’ll add all the constraints you need to make it look perfect throughout the rest of this tutorial.
When you are done, your layout may look something like this:
If Android Studio added any constraints automatically as you dragged the views around, just click the Clear All Constraints button to get rid of them.
One last thing before putting these elements in their final places: change the ID of the root ConstraintLayout to be constraintLayout.
Resizing the Images
Next, fix the image sizes by clicking on each of the icons,
spaceStationIcon,
flightsIcon, and
roverIcon, at the top. Then, in the Attributes panel, change the layout_width and layout_height properties from
wrap_content to 30dp.
You’ll see a bunch of errors listed in the Component Tree. These appear because Android doesn’t have any information from constraints to tell it where to position the UI elements. You’ll start fixing that problem now.
Note: Android Studio offers various
ConstraintLayout tools to save you time, but they don’t always do what you expect. It helps to visualize what the constraints should do before you start to add them. That way, if Android Studio’s tools misbehave, you can add individual constraints one at a time to achieve the effect you want.
Keep this in mind as you use the Align menu and other tools in the steps below: if Android Studio doesn’t do what you expect, go back and add the individual constraints yourself.
Adding Constraints: Figuring out Alignment
You’ll set your constraints with a top-down approach, starting with the elements at the top of the screen and working your way down to the bottom.
You want the three icons at the top of the screen to line up with each other horizontally. Then you’ll center the labels under each of those icons.
Constraining the First Icon
First, you’ll constrain
spaceStationIcon above the word “Space Stations” to the top of the screen.
To do this, click on
spaceStationIcon to select it and reveal its constraint anchors. Click on the top anchor and drag it to the top of the view. The icon may slide up to the top of the view. Don’t connect its left constraint yet.
With the
spaceStationIcon selected, drag it down from the top so that there’s a little space between the top of the view and the rocket.
Next, switch to Code view and examine the updated XML for the rocket icon. You have added one new constraint,
app:layout_constraintTop_toTopOf="parent", and a top margin attribute for the space between the rocket and the top of the view. Update the code to set the margin to 15dp.
The XML for
spaceStationIcon should now look like this:
<ImageView android:
You can adjust the margin in the Design view as well. To do this, switch to Design view and click on the Attributes tab on the right, if it’s not already visible.
Next, click on the
spaceStationIcon to reveal the attributes for that image. After
ID,
layout_width and
layout_height, you’ll see a graphic representation of the margins.
You can pick a new value for the margin by choosing it from the drop-down menu or by clicking on the number and entering a new value.
Aligning the Top Three Icons Horizontally: Using Chains
Next, you want the three icons at the top of the screen to line up in a row with equal spacing between them. To achieve this, you could add a bunch of individual constraints for each icon. However, there’s a much faster way to do this, using chains.
Chains
A chain occurs whenever you have bi-directional constraints. You won’t necessarily see anything special in the XML; the fact that there are mutual constraints in the XML is enough to make a chain.
Whenever you use alignment controls from the menu, such as Align Horizontal Centers, Android Studio is actually applying a chain. You can apply different styles, weights and margins to chains.
Start by switching back to Design view. Shift-click to select all three icons at the top of the screen:
spaceStationIcon,
flightsIcon and
roverIcon. Then right-click to bring up the context menu and select Center ▸ Horizontally. This will automatically create a chain and generate constraints.
In the Design view, you can see that some of the lines representing the constraints look different than others. Some look like squiggly lines, while others resemble a chain.
Exploring Chains
To explore some of the chain modes, click the Cycle Chain Mode button that appears at the bottom of the icons when you select them.
The modes are:
- Packed: The elements display packed together.
- Spread: The elements spread out over the available space, as shown above.
- Spread inside: Similar to spread, but the endpoints of the chain are not spread out.
Make sure you end with spread as the selected chain mode. You’ll know this is selected in one of two ways:
- The view will display with the icons spaced as they are in the example screenshot.
- The attribute app:layout_constraintHorizontal_chainStyle="spread" will be on one of the image views. Updating this attribute is another way to change the chain mode.
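As a sketch of what such a chain looks like in XML, here is a hedged two-view example (a real chain over all three icons adds roverIcon the same way; other attributes such as src are trimmed, and the exact output Android Studio generates may differ):

```xml
<!-- Illustrative sketch only: a two-view horizontal chain.
     The bi-directional constraints between the views form the chain;
     chainStyle on the head view selects the spread mode. -->
<ImageView
    android:id="@+id/spaceStationIcon"
    android:layout_width="30dp"
    android:layout_height="30dp"
    app:layout_constraintHorizontal_chainStyle="spread"
    app:layout_constraintStart_toStartOf="parent"
    app:layout_constraintEnd_toStartOf="@+id/flightsIcon" />

<ImageView
    android:id="@+id/flightsIcon"
    android:layout_width="30dp"
    android:layout_height="30dp"
    app:layout_constraintStart_toEndOf="@+id/spaceStationIcon"
    app:layout_constraintEnd_toEndOf="parent" />
```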
Aligning Views
Again, select the three icons. From the tool bar, select Align ▸ Vertical Centers. Android Studio should add constraints to the images to align the bottom and the top of each image to its neighbor.
Your layout should now look like this:
If your layout doesn’t match this image, check the Text and Design views. If you’ve lost the original constraint between
flightsIcon and the top of the view, and if
spaceStationIcon didn’t get the constraints you expected, press ⌘ + Z on Mac, or Control + Z on Windows to undo.
Then, manually add the constraints by clicking on the top constraint anchor of
spaceStationIcon and dragging it to the top constraint anchor of
flightsIcon, and so on, until you have added all of the constraints in the diagram above.
Your three icons should now have the following XML:
<ImageView android: <ImageView android: <ImageView android:
Aligning the Text for Each of the Icons
Now that the icons are in place, you’ll need to set their text fields to appear in their proper places.
Select the
TextView labeled Space Stations to reveal its constraint anchors. Constrain the left side of the Space Stations
TextView to the left side of the space station icon and the right side of the Space Stations
TextView to the right side of the space station icon. This centers it vertically with the icon.
Then change the default margins in the tool bar to 15dp and just drag from the top anchor of the label to the bottom anchor of the icon, which will set both the constraint and the margin in a single step. Do the same for the other labels to align them to their icons.
Now, the constraint errors for the top two rows of UI elements should be gone. The XML for the top three images and labels should look like this:
<TextView android: <TextView android: <TextView android:
Using Guidelines
So far, you’ve constrained UI elements to their parent containers and to each other. Another option you have is to add invisible guidelines to the layout and constrain UI elements to those guidelines.
Recall that in the final layout, the double arrows image should be centered and should overlap the two green views.
Setting the Horizontal and Vertical Guidelines
Select the double arrows icon and set the height and width to 60dp. Then right-click on it and choose Center ▸ Horizontally in Parent from the context menu.
For each of the green
TextViews, you’ll now set the width to 124dp and the height to 98dp.
To make the double arrows icon overlap the two green
TextViews, you’ll constrain the right side of the left
TextView to the right side of the double arrows icon and set the right margin to 40dp.
Similarly, constrain the left side of the right
TextView to the left side of the double arrows icon and set the left margin to 40dp.
Lastly, constrain the top and bottom of the
TextViews to the top and bottom of
doubleArrowsIcon.
Next, click on the Guidelines menu in the toolbar and select Add Horizontal Guideline.
This will add a horizontal dashed line to the layout.
Select the horizontal guideline using the Component Tree in Design view. In the attributes inspector, change the
ID of the guideline to guideline1. Note the guideline properties:
layout_constraintGuide_begin and
layout_constraintGuide_percent.
For the horizontal guideline, set
layout_constraintGuide_begin to 200dp.
Finally, add a vertical guideline, ensure that you’ve set its
ID to guideline2 and set its
layout_constraintGuide_percent to 0.05. This positions guideline2 to 5% of the screen width from the left.
Positioning the Guidelines
You can position guidelines using one of these three attributes:
- layout_constraintGuide_begin: positions a guideline with a specific number of dp from the left (for vertical guides) or the top (for horizontal guides) of its parent.
- layout_constraintGuide_end: positions a guideline a specific number of dp from the right or bottom of its parent.
- layout_constraintGuide_percent: places a guideline at a percentage of the width or height of its parent.
After you constrain elements to the guidelines, you can constrain other elements to them. That way, if the guideline changes position, everything constrained to the guideline, or to the other elements fixed to the guideline, will adjust its position.
Hint: Later, you’ll use this feature to create some cool animations!
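For instance, the two guidelines used in this tutorial might be declared in XML along these lines (a sketch; the tag name follows the support-library version of ConstraintLayout that this tutorial uses, and the values are the ones chosen above):

```xml
<android.support.constraint.Guideline
    android:id="@+id/guideline1"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:orientation="horizontal"
    app:layout_constraintGuide_begin="200dp" />

<android.support.constraint.Guideline
    android:id="@+id/guideline2"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:orientation="vertical"
    app:layout_constraintGuide_percent="0.05" />
```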
Adding Constraints to Guidelines
Now that your guidelines are set up, you can start adding constraints to them.
First, for the double arrows icon:
- Constrain the bottom to the horizontal guideline.
- Set the bottom margin to 40dp.
For the switch:
- Set the width to 160dp.
- Constrain the left side to the vertical guideline.
- Constrain the top to the parent (top of the screen).
- Set the margin at the top to 200dp.
For the label beneath the switch listing the number of travelers:
- Constrain the left side to the vertical guideline.
- Constrain the top to the bottom of the switch.
For the galaxy icon (id is
galaxyIcon):
- Set the width and height to 90dp.
- Constrain the top to the horizontal guideline.
- Constrain the bottom to the bottom of the parent (bottom of the screen). This will center it between the horizontal guideline and the bottom of the screen.
- Center it horizontally in the parent view.
For the rocket icon to the left of the galaxy icon (ID is
rocketIcon):
- Set the width and height to 30dp.
- Constrain the rocket icon’s top, bottom, and right sides to the top, bottom, and left sides of the galaxy icon, respectively.
Finally, for the DEPART button at the bottom:
- Change the width from wrap_content to match_parent.
- Constrain its bottom to the bottom of the parent (bottom of the screen).
At this point, you should have set all the constraints Android Studio needs to figure out the layout; there should be no errors in the Component Tree. Your layout should now look similar to this:
Your layout looks great now! But clicking the button doesn’t do anything… so it’s time to add some pizzaz with a few simple animations!
Circular Position Constraints
In addition to the methods that you’ve already learned, you can also constrain UI elements relative to each other using distance and an angle. This allows you to position them on a circle, where one UI element is at the center of the circle and the other is on the perimeter.
To do this, select the rocket icon next to the galaxy icon and update its code in Code view as follows:
<ImageView android:
The first constraint attribute, layout_constraintCircle, indicates the ID of the UI element that will be at the center of the circle. The other two attributes indicate the angle and the radius.
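A hedged sketch of a circular constraint on the rocket icon, centered on the galaxy icon (the angle and radius values here are illustrative assumptions, not necessarily the tutorial's exact numbers):

```xml
<!-- Sketch: rocketIcon positioned on a circle around galaxyIcon. -->
<ImageView
    android:id="@+id/rocketIcon"
    android:layout_width="30dp"
    android:layout_height="30dp"
    app:layout_constraintCircle="@+id/galaxyIcon"
    app:layout_constraintCircleAngle="270"
    app:layout_constraintCircleRadius="100dp" />
```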
Why would you want to use such an unusual type of constraint, you ask? Stay tuned, in a moment you’ll use this technique to animate the rocket to fly around the screen!
Note: You can ignore the error in Component Tree for the view using circular constraint. Android Studio doesn’t seem to recognize circular constraint yet.
Build and run the app. Everything should still appear properly positioned onscreen:
Animating the UI Elements on the Screen
Now that you’re a master of laying things out onscreen using
ConstraintLayout, it’s time to add some rocket fuel into the mix and take off to the next level!
In this section, you’ll start with the complex layout you created and add some cool UI animations in just a few steps.
Constraint Sets
Using
ConstraintLayouts, you can use Keyframe Animations to animate your views. To do this, you’ll provide a pared-down copy of your layout file, known as a
ConstraintSet. A
ConstraintSet only needs to contain the constraints, margins and padding of the elements within a given
ConstraintLayout.
In your Kotlin code, you can then apply
ConstraintSet to your
ConstraintLayout to update its layout.
To build an animation, you need to specify a single layout file and a
ConstraintSet to act as the starting and ending keyframes. You can also apply transitions to make your animations a bit fancier.
Setting up the Starting Layout for Your Animation
In your project, duplicate your layout file and name the duplicate keyframe1.xml. You’re going to need to alter the positions of elements in this new layout and set this new layout as the starting layout for the app.
To start, open keyframe1.xml and change the
layout_constraintGuide_begin property of guideline1 from 200dp to 0dp. This moves the guide, the elements constrained to the guide, and all elements constrained to them higher up, so that some of them are are now offscreen.
Then change the
layout_constraintGuide_percent property of guideline2 from .05 to 1. This moves the guide and the elements constrained to it to the far right so that they are offscreen as well.
Now, we’ve changed the layout just by moving a couple of guides, but we still need to make this new layout the starting layout for the app. With these changes, the switch, its label and the arrival and destination space ports no longer appear on the screen. Additionally, the rocket and galaxy icons have moved up:
Animating the View
Change the following import statement in your MainActivity.kt Kotlin class:
    import kotlinx.android.synthetic.main.activity_main.*
to the following:
import kotlinx.android.synthetic.main.keyframe1.*
This allows you to reference UI elements in the new layout XML without any
findViewById() craziness from the pre-historic days of Android development. :]
Next, add the following private properties to the class. You may need to add the
android.support.constraint.ConstraintSet import:
    private val constraintSet1 = ConstraintSet()
    private val constraintSet2 = ConstraintSet()
    private var isOffscreen = true
The first two properties are the constraint sets that you’ll use to animate your view. You will use the boolean to keep track of the layout state.
Transition Manager
You can use the Transition Manager class to handle transitioning from one keyframe to another. To create a layout animation, you simply provide Transition Manager with the
ConstraintSet you want to animate and it will handle the rest. Optionally, you can provide it with custom animations to perform.
Now, add the following to the
onCreate() function, importing
TransitionManager:
    constraintSet1.clone(constraintLayout) //1
    constraintSet2.clone(this, R.layout.activity_main) //2
    departButton.setOnClickListener { //3
        //apply the transition
        TransitionManager.beginDelayedTransition(constraintLayout) //4
        val constraint = if (!isOffscreen) constraintSet1 else constraintSet2
        isOffscreen = !isOffscreen
        constraint.applyTo(constraintLayout) //5
    }
- This pulls the layout information from the initial layout into one of the constraint sets, constraintSet1. Since you added an ID to the ConstraintLayout earlier, you can refer to it directly from code now.
- This pulls the layout information from the final layout into constraintSet2. Since you are creating a ConstraintSet and you never actually inflate the second layout file, you avoid the overhead and performance hit of dealing with a second layout.
- This adds the animation in the listener for the button, for now, so that you can trigger the animation whenever it’s toggled.
- This calls Transition Manager’s beginDelayedTransition function.
- This applies the new ConstraintSet to the currently displayed ConstraintLayout.
Build and run the app. Click the button at the bottom of the screen repeatedly to see how the animation works.
Voila! The app loads with a bunch of elements offscreen. When you tap the button, the guide positions animate, which causes everything constrained to them to animate as well.
Animating the Bounds of a View
Not only can you change the position of elements onscreen by affecting their constraints, but you can also change their size.
Open keyframe1.xml and select the galaxy icon, whose ID is
galaxyIcon. Change the
layout_height property from 90dp to 10dp.
Note: In activity_main.xml, the height is still set to 90dp.
Build and run the app and tap the button at the bottom repeatedly. Now you can witness the expansion of the galaxy in action! :]
Using Custom Transitions to Make Animation Easier
You now have a couple of animations tied to the switch, but wouldn’t it be nice for the view to animate automatically when it first loads?
You’ll do that next, but first you’ll create a custom animation instead of using the default animation, and you’ll also customize the animation’s timing.
Add the following function to MainActivity.kt, adding the import for
android.transition.AutoTransition if it isn’t added automatically:
    override fun onEnterAnimationComplete() { //1
        super.onEnterAnimationComplete()
        constraintSet2.clone(this, R.layout.activity_main) //2
        //apply the transition
        val transition = AutoTransition() //3
        transition.duration = 1000 //4
        TransitionManager.beginDelayedTransition(constraintLayout, transition) //5
        constraintSet2.applyTo(constraintLayout) //6
    }
- Activities can’t draw anything while the view is animating. onEnterAnimationComplete() is the point in the app life cycle where the view animation has completed and it’s safe to call drawing code.
- This pulls the layout information from your final layout into constraintSet2.
- This creates a custom transition. In this case, you are using a built-in transition, AutoTransition(), which first fades out disappearing targets, then moves and resizes existing targets, and finally fades in appearing targets.
- This sets a duration of 1,000 milliseconds for the animation, so that it’s slow enough to be seen.
- This calls Transition Manager’s beginDelayedTransition function, but this time you also supply your custom transition.
- This applies the new ConstraintSet to the currently-displayed ConstraintLayout.
Build and run the app. Now, all of the animations occur as soon as the view loads.
Animating the Circular Constraint
Remember that funny circular constraint you added earlier? Time to add the grand finale animation by flying the rocket around the galaxy!
To animate the rocket around the galaxy, you have to alter two properties: the angle of the circular constraint, which moves the position of the rocket around the circle, and the rotation of the rocket to complete the illusion. You also check the One Way / Round Trip switch value to determine whether the rocket should fly half a circle or one full circle.
Replace the click listener for the DEPART button in
onCreate() as follows:
    departButton.setOnClickListener {
        //1
        val layoutParams = rocketIcon.layoutParams as ConstraintLayout.LayoutParams
        val startAngle = layoutParams.circleAngle
        val endAngle = startAngle + (if (switch1.isChecked) 360 else 180)

        //2
        val anim = ValueAnimator.ofFloat(startAngle, endAngle)
        anim.addUpdateListener { valueAnimator ->
            //3
            val animatedValue = valueAnimator.animatedValue as Float
            val layoutParams = rocketIcon.layoutParams as ConstraintLayout.LayoutParams
            layoutParams.circleAngle = animatedValue
            rocketIcon.layoutParams = layoutParams

            //4
            rocketIcon.rotation = (animatedValue % 360 - 270)
        }

        //5
        anim.duration = if (switch1.isChecked) 2000 else 1000

        //6
        anim.interpolator = LinearInterpolator()
        anim.start()
    }
- Set startAngle to the current angle of the rocket before the animation starts. Depending on the One Way / Round Trip switch, endAngle is either 180 or 360 degrees greater than the startAngle value.
- The ValueAnimator class provides a simple timing engine for running animations between two values. Here you provide startAngle and endAngle to create the instance of ValueAnimator.
- Inside the update listener of the ValueAnimator instance, obtain the animated value and assign it to the rocket’s circleAngle in its layout params.
---------------------------------------------------------------------------
Debian Weekly News
Debian Weekly News - May 4th, 2004
---------------------------------------------------------------------------

Welcome to this year's 18th issue of DWN, the weekly newsletter for the
Debian community. The [1]debian-devel list carried many discussions about
releasing sarge in light of recent editorial changes to the [2]social
contract, leading to more general resolutions and much cross-talk.

Several General Resolutions proposed. Henning Makholm [3]summarised all
recently proposed general resolutions on the [4].

Bootstrapping Debian from Knoppix. Norbert Tretkowski wrote a [5]short
howto on installing Debian stable using [6]Knoppix and [7]debootstrap.
This method is helpful especially when the boot-floppies from woody don't
work with your hardware, but you still want to install Debian stable.

Is there still a Place for Debian Planet? The Debian Planet staff
[8]wondered if the [9]Debian Planet (DP) news website is still serving a
useful purpose for the community, especially in light of the more popular
and confusingly similarly named [10]Planet Debian weblog site. Several
readers replied that they would miss DP if it went off air and gave
suggestions on possible changes to the site. DP is always looking for news
stories, especially longer articles, tips and tricks, or Debian specific
HOWTOs.

Preliminary Schedule for Debian Conference 4. A preliminary [11]schedule
has been published for the upcoming [12]Debconf, the annual Debian
conference, which will take place in Porto Alegre, Brazil this year.
Andreas Schuldei [13]said the program is so good you should attend, if
necessary selling your car to raise the airfare. He also [14]announced a
limited amount of travel support for developers.

Debian Installer Beta-4.
Joey Hess [15] [16]retrospective on the difficult installer release
process, with suggestions on how to improve things next time.

Debian Conference Status Report. Pablo Lorenzzoni [17]reported that the
registration period for the [18].

New /media Hierarchy. Joey Hess [19]reported that new versions of [20]?

Naming Scheme for PEAR and PECL Packages. Andreas Tille is in [21]need of
some PHP PEAR packages for the [22]debian-med sub-project. Steve Langasek
[23]pointed out that the current consensus among the php4 maintainers is
that packages for PEAR modules ought to be called php-*, and packages for
PECL modules (and extensions shipped with php4) ought to be called php4-*
and php5-* as appropriate.

Request to add Package Tags. Enrico Zini [24]noticed that many developers
still don't know that they can tag their packages using [25]debtags, nor
how this can be done. He explained how developers can add tags to their
packages and asked them to add them. Tags are grouped by "facet" or
"namespace", which basically is a different point of view from which to
look at the package archive.

Draft Position Statement on the GFDL. Manoj Srivastava [26]started a new
discussion on a draft position [27]statement on the GNU [28]Free
Documentation License. The Debian project has been [29]discussing problems
with the FDL since November 2001.

Power Management in Debian. Matthias Grimm wanted to [30]know how to
rearrange the scripts for [31]pbbuttonsd. To become more flexible and
maybe share development and infrastructure with other power management
systems like apmd, the script environment and maybe the interface have to
be reformed.

Security Updates. You know the drill. Please make sure that you update
your systems if you have any of these packages installed.

 * [32]eterm -- Indirect arbitrary command execution.
 * [33]mc -- Several vulnerabilities.
 * [34]libpng -- Denial of service.
 * [35]rsync -- Directory traversal bug.
 * [36]flim -- Insecure temporary file creation.

New or Noteworthy Packages. The following packages were added to the
unstable Debian archive [37]recently or contain important updates.

 * [38]aespipe -- AES-encryption tool for tar/cpio and loop-aes images.
 * [39]chan-capi -- Common ISDN API 2.0 implementation for Asterisk.
 * [40]elog -- Logbook system to manage notes through a Web interface.
 * [41]hashalot -- Read and hash a passphrase.
 * [42]knockd -- Small port-knock daemon.
 * [43]hyperlatex -- Creating HTML using LaTeX documents.
 * [44]no-ip -- Second-generation Linux client for dynamic DNS service.
 * [45]odot -- Task list manager written in Gtk2-Perl.
 * [46]paintlib2 -- C++ class library for image manipulation.
 * [47]paxctl -- User-space utility to control PaX flags - new major
   upstream version.
 * [48]pktstat -- Top-like utility for network connections usage.
 * [49]qprof -- Profiling utilities for Linux.
 * [50]tedia2sql -- Converts a Dia diagram to various SQL dialects.

Orphaned Packages. 8 packages were orphaned this week and require a new
maintainer. This makes a total of 165 orphaned packages.

 * [52]dcl -- GNU Enterprise - Double Choco Latte. ([53]Bug#247231)
 * [54]licq -- ICQ clone. ([55]Bug#246972)
 * [56]ptknettools -- Selection of Internet service clients written in
   Perl/Tk. ([57]Bug#246855)
 * [58]raidtools -- Utilities to support 'old-style' RAID disks.
   ([59]Bug#247155)
 * [60]raidtools2 -- Utilities to support 'new-style' RAID disks.
   ([61]Bug#247156)
 * [62]sphinx2 -- Speech recognition library - default acoustic model.
   ([63]Bug#246540)
 * [64]splay -- Sound player for MPEG-1,2 layer 1,2,3. ([65]Bug#246971)
 * [66]xosview -- X based system monitor. ([67]Bug#246973)
2) Got the current time in milliseconds by calling the getTime() method of Date.
3) Created an object of the Timestamp class and passed the milliseconds that we got in step 2 to the constructor of this class during object creation. It constructs the timestamp using the provided milliseconds value.
import java.sql.Timestamp;
import java.util.Date;

public class TimeStampDemo {
    public static void main(String[] args) {
        // Date object
        Date date = new Date();

        // getTime() returns current time in milliseconds
        long time = date.getTime();

        // Passed the milliseconds to constructor of Timestamp class
        Timestamp ts = new Timestamp(time);
        System.out.println("Current Time Stamp: " + ts);
    }
}
Output:
Current Time Stamp: 2014-01-08 18:31:19.37
ares com com
Speedy Lotus Notes to Outlook migration can be executed in few clicks by making use of Export Notes software. Get Export Notes for effective and easy conversion of Lotus Notes mailbox. The Export Notes software that can be used to achieve perfect result in speedy way without losing single bit of data. Latest version 9. 4 of the software come up with new User Interface and features that allow you to preserve all properties. metadata. attachments. images and docklinks during migration process. Download the demo version to evaluate the working of software. Demo version converts first 16 items per folder. If you are satisfied with the performance of software and want to access it in full mode then. you may order for the license version at just $250 USD. Business License at $500 and enterprise license at just $1500 USD. Official website- www(. )export-notes(. )com. .
lotus notes to outlook migration , lotus notes email in outlook , lotus notes export to outlook , lotus migrate to outlook , migrate lotus notes to outlook , migrate from nsf to pst
AbleBits. com Note&Do is a handy add-in for Microsoft Excel. 2013 32-bit and 64-bit.
Microsoft Office notes , Microsoft Office tasks
TeleNotes is a Microsoft Outlook COM Add-in that lets Outlook users send Telephone messages (or notes) to other Outlook users.
Outlook lan , outlook accounting , outlook webmail , gtd outlook , outlook download , Outlook Antispam , outlook pst , outlook sharepoint , outlook rules
Email client provide a place for users where they can easily manage their tasks and email with performing other functions. There are several email clients available such as - Outlook express. Outlook and Lotus Notes. In previous people used to perform PST to Notes Conversion but due to high installation charges and maintenance charges of Lotus Notes application users attracts toward Notes to Outlook Conversion. To Convert Notes to Outlook users also wants expedient solution to perform complete Notes to Outlook. Judging users needs SysTools Group make available Export Notes software that successfully perform Notes to Outlook containing all of Lotus Notes items such as emails. calendars. attachments. to-do list etc. Export Notes advance v8. 3 available with latest features through this user can make process of Notes to Outlook Conversion more effective. Features of Export Notes: Easy to use. Cost effective. Supports bulk email conversion. Password protected file conversion. Alarm in calendar. No Chance of corruption at the time of Notes to Outlook migration as software creates new PST file after crossing the limit of 20GB. Just know about software by clicking followed link:. .
notes to outlook conversion , notes to outlook , notes to outlook converter , convert notes to outlook , pst to notes conversion , export notes
How to import data from Lotus Notes to Outlook? The answer is- download and install Export Notes software and follow 4 quick and easy steps to accomplish Lotus Notes to Outlook data conversion. First step- after installing the Export Notes in your PC. second- follow the conversion wizard. browse NSF file. third- open and do settings in NSF files and last step is click “Start Conversion” button. See. how easy is that? The only required thing is Lotus Notes and Outlook should be installed and configured. Download Export Notes software in trial version which let you transfer Lotus Notes files to Outlook with 16 items. For unlimited conversion. licensed version is required which can be accessible at just $250 USD. With the help of Export Notes. user can import entire mailbox item such as- Emails (inbox. sent. delete. address book. calendar entries. notes. attachment. images etc. into Outlook in short span of time. To know more visit official website- www(dot)exportnotes(dot)com. .
how to import data from lotus notes to outlook , lotus notes to outlook data conversion , transfer lotus notes files to outlook , lotus notes to outlook , lotus notes files to outlook
Superb 9. 4 version of Export Notes removes all the queries of how to import NSF file into Outlook 2007. It friendly interface really very helpful for non technical persons. Easily without any discomfort import NSF into Outlook. Import Notes NSF files into Outlook all the versions of Lotus Notes and Outlook. Use its filtration option that allows you to import NSF into Outlook folder wise or date. By using our Export Notes 9. 4 users get 100% result. no data corrupt or lose while complex Notes to Outlook Conversion. Resolve your issues by downloading this high speed converter which import Notes NSF files into Outlook with 129 MB data per second. Freebie takes 500 MB or first sixteen mails from NSF to PST. Export Notes never fails to gives accurate and helpful Notes NSF file to Outlook. Purchase these comprehensive Notes to Outlook Conversion software at nominal cost of 250 USD only. .
how to import nsf file into outlook 2007 , import nsf into outlook , lotus notes to outlook , nsf file into outlook , notes to outlook , import notes nsf files into outlook
There are many reasons available to quite Lotus Notes email client but when the user decides to leave Lotus Notes and want to convert their data in some other effective email client like Microsoft Outlook then the process requires safe and reliable third party software which helps users in importing emails from Lotus Notes to Outlook without any risk. Most suggested software Export Notes is only solution that can be used to convert emails from Notes to Outlook in a very effective manner. Users may use it for accurate migration of Lotus Notes data. Because the software has technically advance features so. it’s guaranteed that the NSF to Outlook conversion task will get accomplished perfectly. Converting Lotus Notes emails. calendars. tasks etc in any Outlook edition is now possible with Export Notes tool. Know more about software at www(dot)exportnotes(dot)com. .
importing emails from lotus notes to outlook , convert emails from notes to outlook , nsf to outlook conversion , migration of lotus notes data , lotus notes to outlook
If you are querying about asoftware which help you in changing email platform from Lotus Notes to Outlook then Export Notes is the best solution to have. Whole database export from Lotus Notes to Outlook including emails. calendar. and to-do list etc with the help of specially desingned for NSF to Outlook conversion process. Export Notes. Few days before Export Notes latest version 9. 4 has been released in online market that has more feature and attributes to make Lotus Notes to Outlook migration process easy and accurate. It successfully RUNS with ALL Windows versions. Download Free TRIAL version from website www (dot) exportnotes (dot) com/. Itallow you to converts sixteen items of Lotus Notes into Outlook. For BEST Lotus Mailbox conversion with UNLIMITED Notes information. you have to pay just $250 and $500 for personal and business purpose respectively. .
export from lotus notes , export from lotus notes to outlook , lotus notes migration , convert to pst , migrate nsf to pst , nsf file converter , lotus to outlook 2010 , export notes program
For better company handling a while it is needed to change onto Outlook from Lotus Notes. Furthermore the best possible way to Convert Notes archive to Outlook is Export Notes software use. Outlook is much easier than Lotus Notes in use and has a well accessibility user interface for each customer that is why experts suggest it. If you want to perform Lotus Notes to Outlook migration then you have to select third party migratory and Export Notes software is one of best in this process. It has features like- Bulk Conversion. Password protected file Data Conversion. PST file containing exceeding 20GB data. Easy user etc. Download free demo version which converts first sixteen items of Lotus Notes. Users may order for Full Pro Version at the refundable price of $250 USD only from official website: -. convertnsf. com/convert-notes-archive-to-outlook. .
convert notes archive to outlook , lotus notes archive to outlook , lotus notes to outlook , lotus notes to ms outlook , export notes software , nsf to pst conversion , convert lotus notes emails
In online market there are lots utilities available which have capability to export Lotus Notes files but No one has power to export ALL data from Lotus Notes to Outlook without any alteration or deletion except Export Notes software. It assures best results in Lotus Notes data conversion otherwise return money back in case failure of Export Notes program. To check the software ability. the product vendor released a display copy of software in free mode. Users can download it and experience the features and accuracy of conversion before going to purchase. The software allows users to convert NSF to PST on any Windows Interface including Win8. Win7 and XP. Export Notes is skilled in exporting Lotus Notes mailbox items such as e-mails. address-book. calendars. tasks etc along with accurate properties and metadata to Microsoft Outlook. You may know more about the product at- www(dot)export-notes(dot)com. .
export all data from lotus notes , lotus notes to outlook , in lotus notes data conversion , convert nsf to pst , exporting lotus notes mailbox
Export Notes software adept to import Lotus Notes email to Outlook with ease. No matter which version of Lotus Notes is being used by you and in which version of Outlook you want to import your converted files. Software support all versions or editions of respected email clients. Especially software is designed to convert complete or specified mailbox item e. emails. address-book. calendar. archive etc of Lotus Notes to Microsoft Outlook. The software is fully prepared to execute BULK email conversion at once. It also converts encrypted Lotus Notes files to Outlook and recently added feature in Export Notes supports recurrence of calendar entries. Take a free trial tour of software by downloading display version of it. After testing the software if you are pleased with the performance then why not take a further step by having a licensed key of this Lotus Notes email converter. Know more about software- www(dot)export-notes(dot)com. .
import lotus notes email to outlook , lotus notes to microsoft outlook , lotus notes files to outlook , lotus notes email converter , notes email to outlook
NSF to PST Converter tool is perfect solution to transfer Lotus Notes in Outlook 2007. Latest edition of software v9.. NSF to PST tool successfully runs with Windows 98. Vista & Win7. Acquire the full licensed version of Export Notes by spending a very little cost $250 for Personal use and for Business license $500 USD. exportnotes. com/nsf-batch-export. .
If you are wishing to import Lotus Notes mail to Outlook 2010 including all emails. calendars. attachments. to-do list etc then try Export Notes software. User can cleanly export Lotus Notes emails to Outlook 2010 via this viable software. For Notes to Outlook conversion it requires at least one Lotus Notes version (8. 5 etc) and Outlook version (2010. 97) installed in machine. Software is fully compatible with all Windows edition and doesn’t require any additional installation to execute the import/export task. Evaluate this software by downloading free trial copy of software which transfers 16 items for Lotus Notes to Outlook. For moving emails from Lotus Notes to Outlook without any bound. buy it. Personal License is available at $250 and Business License at $500 only. For more details visit website- www(dot)export-notes(dot)com. .
import lotus notes mail to outlook 2010 , moving emails from lotus notes to outlook , notes to outlook conversion , export lotus notes emails to outlook 2010 , lotus notes to outlook
Many Lotus Notes users inquire this query- How to import Lotus Notes emails to Outlook with all attached files. Here is the solution. Just download professional Notes to Outlook conversion tool. Export Notes which helps users in importing Lotus Notes to Outlook 2010. It executes the whole conversion process in 4 simple steps i. e. Launch software-] select NSF file-] apply filters and settings -] and then start conversion. Converting Lotus Notes mail to Outlook is so simple with it. Download cost-less evaluation version of software that converts 16 files of Lotus Notes mailbox to Outlook. User may securely download this professional utility by visiting official website: www(dot)export-notes(dot)com After using trial copy of software if you feel satisfy then GO for full license key that is available at just 250 US Dollar for single user. But if you want it for your business use then you need to pay 500 US Dollar. .
importing lotus notes to outlook 2010 , how to import lotus notes emails to outlook , notes to outlook conversion tool , converting lotus notes mail to outlook , lotus notes mailbox to outlook
Many users have selected Export Notes to convert Lotus Notes to Outlook 2007. If you also want to grab some effectual program for your Lotus Notes to Outlook conversion need then we prefer Export Notes for to accomplish your task. If you want to try it then. download free trial version to convert Notes to Outlook as well as to check capability of software. It provides result or execute NSF to PT data conversion on any Windows Platform versions like 98. Vista and Win7. Lotus Notes and Outlook email applications must be installed in target machine to initiate the conversion process. Trial edition is available at free of cost that converts first sixteen items per folder of NSF to PST. After using it if you are satisfied with display version. and acquire full licensed version of Export Notes Software then order online for Personal License only at $250 and Business License at $500. convertnsf. com/convert-lotus-notes-to-outlook-2007. .
convert lotus notes to outlook 2007 , lotus notes to outlook 2007 , lotus notes to outlook conversion , convert notes to outlook , export lotus notes database to pst , nsf to pst , convert nsf files to pst
Avail Export Notes Software to Move Notes to Outlook that is stuffed with advance features and easy to graspable Graphical User Interface. This Lotus Notes to Outlook Migration Tool give authority to user to Move Notes to Outlook having items such as mailbox. attachments. address book. calendar. appointments etc. To Import Lotus Notes to Outlook you should have Lotus Notes version like 8. 5 and Microsoft Outlook version 2010. and 97. Export Notes software is compatible with ALL Windows versions such as Win7. Me. NT. For you NSF to PST Free Demo version is published by which you can evaluate software as well as convert first sixteen NSF files to PST. Full version offered at $250 for (Single user) or $500 for (Multi user). Website -. convertnsf. com/move-lotus-notes-archive. .
move notes to outlook , moving from lotus notes to outlook , nsf to pst free , lotus notes to outlook migration , import lotus notes to outlook , migrate nsf to pst. calendar. journal entries. notes. & email Meta data like to. attachment. and text into Outlook ANSI or UNICODE PST as you want. NSF to PST Converter successfully accurately Convert Lotus Notes data in Outlook. NSF to PST Converter supports Notes Conversion of NSF files created in any of the Lotus versions 8. It successfully runs with Windows versions 98. Vista & Win7. You can get Free Tool to Convert NSF to PST by visiting our website -. exportnotes. com/emails & download DEMO version of Export Notes software. It allows you to Convert Fifteen items of Lotus Notes to Outlook. Latest version 8. 1 of SysTools Export Notes providing you facility to convert password protected NSF files into Outlook effortlessly. No need to wait anymore just Grab Lotus Notes Conversion because it successfully converts UNLIMITED NSF data in Outlook in few clicks. For more information of Lotus Notes Files Conversion you can purchase Export Notes Personal License $250 for SINGLE User & $500 for MULTI Users. If you get any problem in Lotus Data Convert into Outlook contact our technical staff ANY time 24x7. .
convert nsf to pst tool , convert nsf to pst free , free tool to convert nsf to pst , nsf to pst converter , conversion lotus notes outlook , migrate lotus notes into outlook , connect lotus to outlook , lotus data convert
Get easy and efficient tool to convert Lotus Notes NSF Files to MS Outlook as Export Notes software which easily exports Lotus Notes NSF files to Outlook PST files and extremely helpful to import NSF file to Outlook. It is a most recommended solution ever for the query like how to convert Lotus Notes mail to Outlook. Try this wonderful Lotus Notes database viewer and without any technical effort switch to Outlook from Lotus Notes. For users ease we are providing step by step guidance at the time of conversion. Brilliant Export Notes software effectively supports Internet Header and folder mapping. Software is available at $250 for Personal License and $500 for Business License. convertnsf. com/convert-nsf-file-to-outlook. .
convert lotus notes nsf files to ms outlook , how to convert lotus notes mail to outlook , lotus notes nsf files to outlook pst files , lotus notes database viewer , import nsf file to outlook
def _replace(self, **kargs):
    for item in kargs.items():
        if item[0] not in self._fields:
            raise TypeError
    if self._mutable:
        for item in kargs.items():
            self.__dict__[item[0]] = item[1]
    else:
        new_self = self
        for item in kargs.items():
            new_self.__dict__[item[0]] = item[1]
        return new_self
I am working with a class where one of its arguments, mutable, is either True or False and determines whether the class will be mutable. This is saved under self._mutable. As you can see in this code, I tried to make a completely separate copy of the class, but it still refers to the old one when I call this _replace method.
You can use copy.deepcopy:

import copy
new_obj = copy.deepcopy(obj)
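The else branch in the question only rebinds the reference (new_self = self), so both names point at the same object and the "copy" mutates the original. A minimal sketch of the fixed pattern, using a hypothetical Record class (a stand-in for the asker's class, not the real one) and a shallow copy.copy:

```python
import copy

class Record:
    # Hypothetical stand-in for the asker's class: two fields, optionally mutable.
    _fields = ("x", "y")

    def __init__(self, x, y, mutable=False):
        self.__dict__.update(x=x, y=y, _mutable=mutable)

    def _replace(self, **kargs):
        for key in kargs:
            if key not in self._fields:
                raise TypeError("unexpected field %r" % key)
        if self._mutable:
            self.__dict__.update(kargs)
            return self
        # copy.copy (or copy.deepcopy for nested data) gives a genuinely
        # separate object; plain assignment `new_self = self` only copies
        # the reference, which is the bug in the question.
        new_self = copy.copy(self)
        new_self.__dict__.update(kargs)
        return new_self

r1 = Record(1, 2)
r2 = r1._replace(x=10)
print(r1.x, r2.x)  # prints: 1 10 - the original is untouched
```

For flat attribute dicts a shallow copy is enough; deepcopy is the safer default when fields hold nested mutable values.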
Working with JSON data in Python
JSON is a very popular data format. Its popularity is probably due to its simplicity, flexibility and ease of use. Python provides some pretty powerful tools that make it very easy to work with JSON data.
Python has a builtin module called json which contains all the features you need to export, import and validate JSON data.
What is JSON?
JSON stands for JavaScript Object Notation. It comes from JavaScript, but can be used in any programming language. It can be used to transfer or store data in a simple human-readable format.
It is a subset of JavaScript (so it is executable with eval, but you should never ever do that, as it can lead to very serious security issues)
It is important to note, that JSON is not a concrete technology it is just a standard for describing data. So it does not define things like maximal string length, biggest available integer or floating point accuracy - however the underlying language or a certain implementation of a JSON parser will certainly have these kinds of limitations.
Why is JSON so popular?
- Generating and parsing JSON is easy for machines
- JSON is a human-readable data format
- It is extremely simple
- Despite of it’s simplicity, it’s still quite powerful and flexible
What does JSON data look like?
As I mentioned above, JSON is a subset of JavaScript, but it has some restrictions. Basically, you can define JSON objects the way you would define objects in JavaScript.
An example of a piece of JSON data:
{ "exampleString": "hello", "exampleObject": {"field": "value"}, "exampleNumber": 1234, "exampleArray": ["aString", 1234, {"field2": "value2"}] }
Note that the syntax is a bit stricter than in JavaScript:

- JSON objects cannot have field names without the surrounding double quotes ({field: "value"} is invalid)
- JSON strings must be enclosed in double quotes - single quotes are not allowed ({"field": 'value'} is invalid)
- Trailing commas after the last field are not allowed in JSON objects ({"field": "value",} is invalid)
JSON data types
JSON defines four data types: string, number, object, array, and the special values of "true", "false" and "null". That's all. Of course arrays and objects can contain strings, numbers or nested arrays and objects, so you can build arbitrarily complex data structures.
JSON strings
JSON strings consist of zero or more characters enclosed in double quotes.
Examples: "hello world", ""
JSON number
JSON numbers can be integers or decimals; scientific notation is also allowed.

Examples: 123, -10, 3.14, 1.23e-14
JSON object
Objects are a collection of key-value pairs. Keys should be enclosed in double quotes. Keys and values are separated by colons and the pairs are separated by commas. Values can be of any valid JSON type. The object is enclosed in curly braces.
Example:
{"hello": "world", "numberField": 123}
JSON array
JSON arrays can contain zero or more items separated by commas. Items can be of any valid type.
Examples: [], ["a"], [1, 2, 3], ["abc", 1234, {"field": "value"}, ["nested", "list"]]
Where is JSON used?
JSON can be used to both transfer and store data.
JSON web APIs - JSON data transfer in HTTP REST APIs
JSON is commonly used in REST APIs, both in the request and the response body. Requests carrying JSON are usually marked with the application/json content type. An HTTP client can also indicate that it expects a JSON response by using the Accept header.
Example HTTP request:
POST /hello HTTP/1.1
Content-Type: application/json
Accept: application/json

{"exampleData": "hello world"}
Example HTTP response:

HTTP/1.1 200 OK
Content-Type: application/json

{"exampleResponse": "hello"}
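With only the standard library, such a request could be assembled along the following lines; the URL is a placeholder, and the request object is only constructed here, never actually sent:

```python
import json
import urllib.request

# JSON bodies travel as UTF-8 encoded bytes over HTTP.
payload = json.dumps({"exampleData": "hello world"}).encode("utf-8")

# Build (but don't send) a POST request carrying a JSON body.
req = urllib.request.Request(
    "http://example.com/hello",  # placeholder URL
    data=payload,
    headers={
        "Content-Type": "application/json",
        "Accept": "application/json",
    },
    method="POST",
)
print(req.get_method(), req.full_url)  # POST http://example.com/hello
```

Sending it would then be a matter of passing req to urllib.request.urlopen (or using a third-party client such as requests, which handles the encoding for you).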
NoSQL databases
JSON is commonly used for communicating with non-relational databases (such as MongoDB). NoSQL databases let you dynamically define the structure of your data, and JSON is perfect for the task because of its simplicity and flexibility.
JSON in Python - The JSON module
Working with JSON in Python is rather simple, as Python has a builtin module that does all the heavy lifting for you. With the help of the json module you can parse and generate JSON-encoded strings and also read or write JSON encoded files directly.
Working with JSON strings
Exporting data to JSON format
You can turn basic Python data types into a JSON-encoded string with the help of json.dumps; the usage is pretty simple:
data = {
    "list": ["hello", "world"],
    "integer": 1234,
    "float": 3.14,
    "dir": {"a": "b"},
    "bool": False,
    "null": None
}

import json
json_encoded_data = json.dumps(data)
print(json_encoded_data)
Output:
{ "float": 3.14, "list": ["hello", "world"], "bool": false, "integer": 1234, "null": null, "dir": {"a": "b"} }
Parsing a JSON string
The reverse - parsing a JSON-encoded string into Python objects - can be done by using the json.loads method, like so:
json_encoded_data = '''{
    "float": 3.14,
    "list": ["hello", "world"],
    "bool": false,
    "integer": 1234,
    "null": null,
    "dir": {"a": "b"}
}'''

import json
data = json.loads(json_encoded_data)
print(data)
Output:

{'float': 3.14, 'list': ['hello', 'world'], 'bool': False, 'integer': 1234, 'null': None, 'dir': {'a': 'b'}}
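The example above shows the mapping informally; more precisely, each JSON type decodes to a fixed Python type - str, int or float, dict, list, bool and None. A quick check:

```python
import json

# JSON -> Python type mapping, as implemented by the json module's decoder.
assert isinstance(json.loads('"hello"'), str)    # JSON string  -> str
assert isinstance(json.loads('123'), int)        # JSON integer -> int
assert isinstance(json.loads('3.14'), float)     # JSON decimal -> float
assert isinstance(json.loads('{"a": 1}'), dict)  # JSON object  -> dict
assert isinstance(json.loads('[1, 2]'), list)    # JSON array   -> list
assert json.loads('true') is True                # JSON true    -> True
assert json.loads('false') is False              # JSON false   -> False
assert json.loads('null') is None                # JSON null    -> None
print("all conversions behave as expected")
```

The same mapping runs in reverse for json.dumps, which is why round-tripping a plain dict of these types is lossless.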
Validating a JSON string
The Python json module does not have a dedicated way to validate a piece of JSON data, however you can use json.loads to do that. json.loads will raise a JSONDecodeError exception on invalid input, so you can use that to determine whether or not a string contains properly formatted JSON.
For example, you can define the following function to validate JSON strings:
import json

def is_valid_json(data: str) -> bool:
    try:
        json.loads(data)
    except json.JSONDecodeError:
        return False
    return True
This function accepts a string as its single argument and returns a boolean. It will try to load the string, and if it is not valid JSON, it will catch the raised exception and return False. If the JSON is valid, no exception is raised, so the return value will be True.
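Reusing the validator above (repeated here so the snippet is self-contained), a few quick checks behave as expected:

```python
import json

def is_valid_json(data: str) -> bool:
    # Same validator as above: valid iff json.loads doesn't raise.
    try:
        json.loads(data)
    except json.JSONDecodeError:
        return False
    return True

print(is_valid_json('{"field": "value"}'))  # True
print(is_valid_json("{field: 'value'}"))    # False - unquoted key, single quotes
print(is_valid_json('[1, 2, 3,]'))          # False - trailing comma
```

Note that the special values are valid JSON documents on their own, so is_valid_json('null') also returns True.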
Working with JSON files in Python
The json module also makes it possible for you to work with JSON files directly. Instead of loads and dumps you can use the load and dump methods. These methods work directly on files - they take an extra argument, and instead of reading/writing strings in memory they let you import/export JSON data from/to the files you pass.
Exporting data to a JSON file
Exporting JSON data can be done by using the json.dump function. It takes two arguments: the first is the Python object that you'd like to export, while the second is the file where you want to write the encoded data.
Example usage:
data = {
    "list": ["hello", "world"],
    "integer": 1234,
    "float": 3.14,
    "dir": {"a": "b"},
    "bool": False,
    "null": None
}

import json
with open('output.json', 'w') as output_file:
    json.dump(data, output_file)
First we opened the file for writing and passed the file handle to json.dump as its second argument.
output.json will contain something like (whitespace added for readability):
{ "float": 3.14, "list": ["hello", "world"], "bool": false, "integer": 1234, "null": null, "dir": {"a": "b"} }
Parsing a JSON file
Reading JSON data from a file into an in-memory Python object can be done very similarly - with the help of the json.load method.
This method takes a file as its argument - the file that you'd like to read from.
For example, to parse the file that we created in the previous example, we can write:
import json

with open('output.json', 'r') as input_file:
    data = json.load(input_file)

print(data)
First we open the file for reading, and then pass the file handle to json.load.
Expected output:
{'float': 3.14, 'list': ['hello', 'world'], 'bool': False, 'integer': 1234, 'null': None, 'dir': {'a': 'b'}}
Validating a JSON file
To validate that a file contains valid JSON data, we can use the json.load method and try to load the JSON contained in the file. On failure we can catch the JSONDecodeError raised by json.load. If no exception occurs, the file contains valid JSON.
import json

def is_valid_json_file(input_file: str) -> bool:
    try:
        with open(input_file, 'r') as f:
            json.load(f)
    except json.JSONDecodeError:
        return False
    return True
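A quick sketch exercising the function with throwaway files (the write_tmp helper is mine, not from the article):

```python
import json
import os
import tempfile

def is_valid_json_file(input_file: str) -> bool:
    # Same validator as above: valid iff json.load doesn't raise.
    try:
        with open(input_file, 'r') as f:
            json.load(f)
    except json.JSONDecodeError:
        return False
    return True

def write_tmp(text: str) -> str:
    # Helper (not from the article): write text to a temp file, return its path.
    fd, path = tempfile.mkstemp(suffix=".json")
    with os.fdopen(fd, "w") as f:
        f.write(text)
    return path

good, bad = write_tmp('{"a": 1}'), write_tmp('{"a": 1,}')
print(is_valid_json_file(good), is_valid_json_file(bad))  # True False
os.remove(good)
os.remove(bad)
```

One caveat: open itself can raise OSError for a missing or unreadable path, which this function deliberately lets propagate - it only answers "is the content valid JSON", not "does the file exist".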
Dart is an application programming language that’s easy to learn, easy to scale, and deployable everywhere.
Google depends on Dart to make very large apps.
Core goals
Dart is an ambitious, long-term project. These are the core goals that drive our design decisions.
Provide a solid foundation of libraries and tools
A programming language is nothing without its core libraries and tools. Dart’s have been powering very large apps for years now.
Make common programming tasks easy
Application programming comes with a set of common problems and common errors. Dart aims to make these common tasks easy.
Dart might seem boring to some. We prefer the terms productive and stable. We work closely with our core customers—the developers who build large applications with Dart—to make sure we’re here for the long run. | https://www.dartlang.org/ | CC-MAIN-2016-44 | refinedweb | 140 | 76.01 |
To get started with writing a .NET app for AutoCAD, download the ObjectARX SDK for AutoCAD 2007. Contained within the samples/dotNet folder of the SDK are a number of helpful samples showing how to use various features of the managed API to AutoCAD.
Incidentally, the project files etc. are generally saved in the version of Visual Studio that is recommended to build ObjectARX (C++) apps for that version of AutoCAD. So the projects in the ObjectARX 2006 SDK will be for Visual Studio .NET 2002, and in ObjectARX 2007 they will be for Visual Studio 2005. These specific Visual Studio versions are not strictly necessary to use the managed APIs for the respective versions of AutoCAD (that's one of the beauties of .NET, in that it helps decouple you from needing a specific compiler version), but for consistency and our own testing we maintain the parity with the version needed to build ObjectARX/C++ applications to work with AutoCAD.
The simplest sample to get started with is the classically named "Hello World" sample, which in this case is a VB.NET sample. I won't talk in depth about any of the samples at this stage; I'm going to focus more on how to use the ObjectARX Wizard to create a VB.NET application.
In the utils\ObjARXWiz folder of the ObjectARX SDK, you'll find the installer for the ObjectARX Wizards (ArxWizards.msi). I'm using the Wizard provided with the ObjectARX SDK for AutoCAD 2007.
Once installed, you can, of course, create new ObjectARX/C++ projects; we use this tool all the time in DevTech to help generate new SDK samples as well as diagnose API issues reported to us. A relatively new feature is the AppWizard for VB.NET and C#. This is visible when you ask Visual Studio 2005 to create a new project:
Once you select "OK", you will be shown a single page to configure your project settings - all very simple stuff:
Selecting "Finish" will set up the required project settings and generate the basic code needed for your application to define a single command called "Asdkcmd1".
Before we look into the code, what has the Wizard done? It has created a Class Library project, adding a couple of references to the DLLs defining the managed API to AutoCAD. If you select "Add Reference" on the created project, you can see them in the "Recent" list:
There are two AutoCAD-centric references listed here: acdbmgd.dll, which exposes the internal AcDb and supporting classes (common to both AutoCAD and RealDWG), and acmgd.dll, which exposes classes that are specific to the AutoCAD application.
So now let's look at the code. It's really very straightforward - it imports a namespace (which saves us from prefixing certain keywords such as CommandMethod with "Autodesk.AutoCAD.Runtime.") and then defines a class to represent our application module. This class (AdskClass) defines callbacks that can be declared as commands. This is enough to tell AutoCAD that the Asdkcmd1 method needs to be registered as a command and should be executed when someone types that string at the command-line.
Imports Autodesk.AutoCAD.Runtime

Public Class AdskClass

    ' Define command 'Asdkcmd1'
    <CommandMethod("Asdkcmd1")> _
    Public Sub Asdkcmd1()
        ' Type your code here
    End Sub

End Class
And that's really all there is to it. To see it working, add a function call to the command function, such as MsgBox("Hello!"), build the app, and use AutoCAD's NETLOAD command to load the resultant DLL. When you type in ASDKCMD1 at the command line, your custom command defined by VB.NET should be called.
Time for some quick credits: a number of the DevTech team have been involved over the years in developing the ObjectARX Wizard (including the recent versions that support .NET) but the chief architect of the tool is Cyrille Fauvel, who is part of the DevTech EMEA team and is based in France. | http://through-the-interface.typepad.com/through_the_interface/2006/07/getting_started.html | CC-MAIN-2017-09 | refinedweb | 658 | 61.16 |
>> almost seems like there are people specifically leading the public on a ride - these are the modern-day Pied Pipers... but perhaps not - maybe we just have over-enthusiastic "analysts" who are looking for some TV time, spewing forth some plausible scenario, and if it can be believed, it will be. What we need to realize is that when a large number of investors pull their money out of bonds, the yields will perk up, which also means stocks will get hammered initially due to fear and risk concerns.
A lot of people are assuming the return of the housing sector is going to save the day, with people beginning to spend more - but hold on. If risks abate, bond yields will go up, inflation will also perk up, and with our personal savings rate so low and income growth muted, will we really have more money to spend? I doubt the consumer is going to come back the way the analysts expect, and without the consumer I doubt any more cost reductions are possible to attain more profits, and yet the S&P 500 is modeled to generate earnings of $112 per share this year and $125 in 2014!
This is simply the bubble effect: the 'wall of money' will push equity prices higher and higher. It then blindfolds the majority of market players (especially retail investors), and they forget to take the underlying fundamentals into consideration when buying into equities or reallocating a portfolio.
As for the mentioned outflow from commercial bank deposits, the outflow from the safe haven may be rational when we look at extremely low interest rates.
The lesson, if there is any, is that human nature and irrationality of herd sentiment does not change - investment managers have learned nothing from the 2007/8 crisis, and sector rotations continue despite the invisible elephant in the room: Fed's growing balance sheet of bonds bought with QE cash.
"Risk on" was only a question of time with near-zero interest rates in the US, UK and EU but continuing inflows into their own gilts is puzzling - unless central bankers like the Fed are pressured to 'gift' their bond holdings to government treasury to offset deficits, their sale will crash the bond market and instantaneously switch inflation into high gear over the medium term.
Derivative trades largely remain unregulated and continue to grow systemic risk, widening the value gap between the financial markets and the real productive economy. Libor may be forgotten but similar problems remain unresolved.
We can blindly place our faith in active or index managers, or succumb to our own reactions to manipulated media reports - but given the high price of laziness, it may be better to objectively compare asset managers' methods to value-investing principles, such as those of the Oracle of Omaha, who takes a longer-term view of equity investments as if he were buying the underlying company, applying six principles without sentiment to buy and hold:
1. Indicators of good management include share buybacks, good use of retained earnings, and companies who stick to what they know
2. Demonstrated earning capacity with a likelihood these will continue, measured by company growth, providing for inflation, capital expenditure, look-through earnings and strong brand names
3. Consistently "higher" returns evidenced by returns on equity and returns on capital [6% is more sustainable than 17% where inflation is 2%]
4. A prudent approach to debt, evident from leverage and gearing
5. A simple business which the investor understands
6. If the above criteria are met, investment should only be made at a reasonable price, with a margin of safety, considering price/earnings ratios, earnings and dividend yields, book value, and comparative rates of return.
If the prospectus and marketing claims of investment managers and deals seem too good to be true, they invariably are - "bubbles" cause pain and loss when they explode in an investor's face, but asset managers collect their fees either way.
I think the reason people think fund flows determine returns is that people imagine that the market is like a large balloon that inflates. What they don't realize is that for every buyer there is a seller. When we talk about fund flows we're talking about retail fund investors, who may be the least sophisticated of all investors (although they say institutional investors are just as bad). The question is: who is on the other side of these trades? Sophisticated Wall Street insiders and individual investors? Hedge funds? Why don't we measure flows into different markets by these investors?
The majority of the money invested in the stock market and equities is people's savings for retirement, and they have either very little or no control over their money. I cannot understand anyone in the USA who has a good amount of savings putting it into such high-risk investments. Currently, real estate prices are extremely low. If you have savings of 10% of the property price, you can find properties where the rent can pay all the costs and mortgage payments. After some years, as the mortgage gets smaller, your income from rent increases. Even after you get old and pass away at some point, your kids can continue having the same income. This type of investment has leverage. However, the money put into stocks and equities melts away over time regardless of how good your choice of investment is. The trick is not to buy a property randomly, but to find one where the rent leaves you extra profit after paying the mortgage and expenses. Even if the extra profit is very tiny at the start, it is still better than any other investment.
I have discussed risk of globalization and feeding large corporations in puppet Corporation...
There is a typo: the great rotation is out of bonds and into equities. it is actually elaborated on correctly, but the opening sentence is wrong.
A scenario in which bonds lose value due to rising interest rates & stocks tread water (as noted) is quite likely. Interestingly, this would partially resolve the difficulties posed by reversing quantitative easing because money supply would contract. The Fed will regard this as "normalization" & will welcome it. It's part of the plan. It would be good for the economy. But it will not be good for many investors. In other words, the necessary contraction of the money supply will take place in pension funds & retirement accounts. The great rotation is not a zero-sum phenomenon, but one that will vaporize money. It's not intrinsically hydraulic, such that money out of bonds must inflate stocks. The money can simply go to money heaven.
What this argument leaves out is the investors' shifting between investment "vehicles". It would be interesting to know whether the massive equity mutual fund outflows since 2008 have been accompanied by an increase in direct stock holdings.
In addition, a Granger test may not be the optimal statistical tool if the relationship in question is not bivariate, but there may be a third, or more variables, influencing both returns and flows. Especially after 2008, one such omitted variable might be investors' attitudes towards financial intermediaries.
Out of equities and into bonds?
"It is part of the odd nature of asset markets that a rise in price causes an increase in demand, not a fall. Conversely, a very sharp fall in an asset price can put investors off for a considerable period."
It is of the utmost importance to remember that financial markets are nothing like ordinary markets in goods and services. Where the latter require very little regulation, the former are quite mad, and ought to be in chains.
Money doesn't move (by itself). People move money. If you know what people want, you can predict where the money's going to go.
What do people want? In investment, they want either safety or return (or, preferably, both, but that's usually not possible).
But most people who are looking for return go where they see a return in the last time period. This leads to bubbles (and margin/leverage makes them much bigger and also more destructive).
What investors who are chasing returns *should* be doing is finding things that have gone down, not up, but which have the fundamentals to go back up. (This is that whole "buy low, sell high" thing.)
Yes. This is an obvious recommendation. A classic principle. But not so easy. And there is one truth about financial markets: do not trust professional advisers in the financial markets, especially if they work for a bank. Looking for things that have gone down, as you recommend, in July 2010 I invested in a fund linked to the Eurostoxx 50. At that time it was around 2,700 points, which was considered down. Indeed it rose for a time. I thought the main European companies had, as you define it, the fundamentals to go back up. Three years later, the Eurostoxx 50 hasn't yet recovered the 2,700 points. Conclusion: there are no official rules for investing.
Shouldn't the first sentence have read 'THE big theme of the market this year, as already mentioned in this blog, has been the "great rotation" out of bonds and into equities.'?
Yes it should have.
The big bugaboo is the high-speed trading based on algorithms
(about 75% of trades if I remember the TE article correctly).
We don't know what level of retail investment - in terms of trade percentage or money - is needed before the algorithms issue a "sell everything" signal.
For the record, in December of 2012 Margin Debt hit 87% of the previous peak of July 2007.
Maybe we are getting close to a turn?
NPWFTL
Regards
"For the record, in December of 2012 Margin Debt hit 87% of the previous peak of July 2007."
That's the most frightening thing I've heard in a long time.
"For the record, in December of 2012 Margin Debt hit 87% of the previous peak of July 2007."
---
I was starting to wonder if I should begin taking profits... I guess I'm not alone.
"For the record, in December of 2012 Margin Debt hit 87% of the previous peak of July 2007."
Where did you get this stat? I believe you, and it is indeed very interesting. I want to track it myself rather than wait for your next comment to inform me...
(Think you might have it flipped in the first sentence – and there’s a capitalization-typo in second line of the para after the quote.)
$1-trillion a year in QE is happening right now. That covers a big slice of the federal budget deficit, whether all of it is directly in Treasuries or not. The current-account deficit of better than half-a-trillion a year has a big impact too - it's rather like more QE in effect. This doesn't feel like a 'crowding-out' situation.
Nothing has to be sold for new money to pour into markets of any type - got 'bubble' written all over it IMO, and that's by design. All bubbles break in time, right? | http://www.economist.com/comment/1889543 | CC-MAIN-2015-06 | refinedweb | 1,862 | 61.97 |
Red Hat Bugzilla – Bug 201826
Calls to nice(2) will fail with errno set to EACCES rather than EPERM
Last modified: 2007-11-30 17:11:39 EST
Description of problem:
The man page for nice(2) states:
ERRORS
       EPERM  The calling process attempted to increase its priority by
              supplying a negative inc but has insufficient privileges. Under
              Linux the CAP_SYS_NICE capability is required. (But see the
              discussion of the RLIMIT_NICE resource limit in setrlimit(2).)
However, glibc converts a call to nice(2) into a call to setpriority(2),
which WILL return with errno set to EACCES.
An strace of the program:
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    if (nice(-19) < 0)
        printf("%d\n", errno);
    return 0;
}
shows that in reality it calls:
setpriority(PRIO_PROCESS, 0, 4294967277) = -1 EACCES (Permission denied)
Version-Release number of selected component (if applicable):
glibc-2.4.8
How reproducible:
Run the test program above.
Steps to Reproduce:
1. Compile the code snippet above
2. Run
3. See incorrect errno set
Actual results:
failure with errno set to EACCES
Expected results:
failure with errno set to EPERM
Additional info:
This isn't just a man page error, as if the legacy nice syscall is invoked,
it reacts as documented.
For example, this program:
#include <sys/types.h>
#include <sys/syscall.h>
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <errno.h>
int main(void)
{
    if (nice(-19) < 0)
        printf("%d\n", errno);
    if (syscall(__NR_nice, -19) < 0)
        printf("%d\n", errno);
    return 0;
}
will return:
$ ./a.out
13
1
Fixed in upstream CVS and glibc-2.4.90-20 in rawhide. | https://bugzilla.redhat.com/show_bug.cgi?id=201826 | CC-MAIN-2016-50 | refinedweb | 262 | 59.5 |
react native expert expo
Budget $50-51 USD
I am facing an issue on a React Native project.
If you have experienced the same issue before, you can fix it within a minute :)
I am using ethers for this project because this is a blockchain project.
the code is below.
import {ethers} from "ethers"
import "@ethersproject/shims"
but after importing the shims, the emulator suddenly becomes very slow.
I have tried to figure out this issue, but with no results.
So you should have prior experience with this issue to solve this problem.
please bid if you are sure you can do it soon.
3 freelancers are bidding on average $51 for this job
Hi, I have carefully checked your requirements: you need an expert App Developer/Designer for your project. I am able to do this job because I have a good command of React Native, React Js, Javascript & All the Ado… More
Hello, I would like to be considered for the job post as a React developer. I'll fix all the issues on the project as per your mentioned requirements. I'm interested in a long-term relationship. I have expertise… More
Checking for the Existence of a File
The File System Task in SSIS doesn't support checking to see if a file exists. You can work around this easily with a script task. Create a new script task and add an Imports statement referencing the System.IO namespace to the top of the script.
Imports System.IO
Then add the following to the Main method:
If File.Exists(Dts.Connections("ConnMgrA").AcquireConnection(Nothing).ToString()) Then
    Dts.TaskResult = Dts.Results.Success
Else
    Dts.TaskResult = Dts.Results.Failure
End If
This script checks the file referenced by the ConnMgrA connection manager. If it exists, the script task returns Success, meaning execution will follow the Success constraint from the Script Task. If the file does not exist, the task will fail, and the Failure constraint will be used. You could also set a variable with the results, and use that in an expression on a precedence constraint.
If, instead of using a connection manager, you want to get the file name from a variable, you can replace the If statement with the following:
If File.Exists(ReadVariable("FileNameVariable").ToString()) Then
The variable locking is occurring in the ReadVariable method. To see the definition for it, please refer to Daniel Read’s blog post here. This is a good practice to follow when working with variables in Script Tasks.
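The helper itself isn't shown in the post. Here is a minimal sketch of what ReadVariable might look like, assuming the SSIS 2005 Script Task object model (Dts.VariableDispenser); the function name and exact shape are an assumption based on the locking pattern the post describes:

```vbnet
' Hypothetical helper: read a variable's value with explicit locking.
Private Function ReadVariable(ByVal name As String) As Object
    Dim result As Object
    Dim vars As Variables = Nothing
    ' Lock just this one variable for the duration of the read
    Dts.VariableDispenser.LockOneForRead(name, vars)
    Try
        result = vars(name).Value
    Finally
        ' Always release the lock, even if the read throws
        vars.Unlock()
    End Try
    Return result
End Function
```

Keeping the lock window this short avoids the deadlocks you can hit when variables stay locked for the whole Main() body.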
There is a Connect posting here requesting that the File System Task be enhanced to support checking for a file's existence. If you'd like to see this in a future version, please vote for the issue.
C++ Callback Demo
This article was contributed by Elmue.
Environment: Pure C++. Runs on Windows, Mac, Linux, and so on.
Introduction
This fully functional example shows how in C++ callbacks can be done in an absolutely flexible way!
Callbacks in C++ are not as simple as in C. Pure C functions are of the type __cdecl; C++ member functions are of the type __thiscall. (They differ in the way they pass arguments on the stack.)
In C++, you have classes and, additionally, instances of classes. Every instance uses its own memory area for storing class variables. The pointer to this area of variables is the "this" pointer. It represents the instance. Every time you call any C++ function, the "this" pointer is passed to the function as an invisible parameter! (M$ Visual Studio 6 uses the processor register ECX to pass the "this" pointer.)
So, in C++ it is not enough to store the address of the function, which you want to call back. You also have to store the "this" pointer!
Using the Callback Class
You can include "Callback.h" into your project. Its usage is very simple because the cCallback class has only two member functions: SetCallback() and Execute(). You can understand the following examples without knowing what is happening inside cCallback.
cMyProject.h:
#include "callback.h"

private:
    // the functions of your project
    void CallbackFox   (int Param);
    void CallbackRabbit(int Param);
    void TestTheCallback(cCallback *pCallbackFunction, int Param);

    // Some instances of the Callback class
    TCallback<cMyProject> i_Callback_1;
    TCallback<cMyProject> i_Callback_2;
cMyProject.cpp:
void cMyProject::CallbackRabbit(int Param)
{
    char Buf[50];
    sprintf(Buf, "Now I'm in Rabbit with Param %d !\n", Param);
    printf(Buf);
}

void cMyProject::CallbackFox(int Param)
{
    char Buf[50];
    sprintf(Buf, "Now I'm in Fox with Param %d !\n", Param);
    printf(Buf);
}

void cMyProject::TestTheCallback(cCallback *pCallbackFunction, int Param)
{
    // execute whatever callback the cCallback object points to
    pCallbackFunction->Execute(Param * Param);
}

void cMyProject::CallbackDemo()
{
    i_Callback_1.SetCallback(this, &cMyProject::CallbackRabbit);
    i_Callback_2.SetCallback(this, &cMyProject::CallbackFox);

    TestTheCallback(&i_Callback_1, 4);
    TestTheCallback(&i_Callback_2, 5);
}
If you call cMyProject::CallbackDemo(), the output will be:
Now I'm in Rabbit with Param 16 ! Now I'm in Fox with Param 25 !
Callback Re-Definitions
It is also possible to re-define the callback with SetCallback() as often as you like:
void cMyProject::CallbackDemo()
{
    i_Callback_1.SetCallback(this, &cMyProject::CallbackRabbit);
    TestTheCallback(&i_Callback_1, 4);

    i_Callback_1.SetCallback(this, &cMyProject::CallbackFox);
    TestTheCallback(&i_Callback_1, 5);
}
The output would be the same, but i_Callback_2 is not needed anymore.
Callback Arrays
It is also possible to use arrays of callbacks:
cMyProject.h:
private:
    TCallback<cMyProject> i_Callback[10];
cMyProject.cpp:
void TestTheCallback(int Index, int Param)
{
    i_Callback[Index].Execute(Param * Param);
}

void cMyProject::CallbackDemo()
{
    i_Callback[0].SetCallback(this, &cMyProject::CallbackRabbit);
    i_Callback[1].SetCallback(this, &cMyProject::CallbackFox);
    i_Callback[2].SetCallback(.....);

    TestTheCallback(0, 4);
    TestTheCallback(1, 5);
}
Callback Arrays, Part 2
In the above example, all callbacks are from cMyProject. In i_Callback you can ONLY store callbacks to the cMyProject class because it is defined as
TCallback<cMyProject>.
If you want to store callbacks to different classes in a callback array, you have to create the array from cCallback instead of TCallback:
cMyProject.h:
private:
    cCallback *p_Callback[10];
cMyProject.cpp:
void cMyProject::StoreCallback(cCallback *p_CallbackFunction, int Index)
{
    p_Callback[Index] = p_CallbackFunction;
}
StoreCallback() can then be called by ANY class to set a callback to itself. For example:
cDemo.h:
private:
    TCallback<cDemo> i_MyCallback;
cDemo.cpp:
#include "cMyProject.h"

extern cMyProject i_MyProject;
......
i_MyCallback.SetCallback(this, &cDemo::MyCallbackFunction);
i_MyProject.StoreCallback(&i_MyCallback, Index);
......
You can even later modify i_MyCallback with SetCallback() without having to call StoreCallback() again!!
In the source code (see the download link at the end of this article) you'll find a different example, and additionally a demonstration of a global callback, which you need if you want to be called back by the operating system. (Windows API callbacks always go into the global namespace.)
The Callback Class
Finally, here comes the great cCallback class itself. It consists of only a header file, without a corresponding cpp file.
Callback.h:
class cCallback
{
public:
    virtual void Execute(int Param) const = 0;
};

template <class cInstance>;
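The body of the TCallback template has been truncated in this copy of the article. Below is a minimal reconstruction consistent with the SetCallback()/Execute() usage shown in the examples above; the cCallback base is repeated so the sketch compiles stand-alone, and the typedef name tFunction is an assumption, not taken from the original:

```cpp
// Repeated here so the sketch is self-contained
class cCallback
{
public:
    virtual void Execute(int Param) const = 0;
    virtual ~cCallback() {}
};

// Sketch of the truncated template: stores the "this" pointer
// plus a pointer to a member function of that instance's class.
template <class cInstance>
class TCallback : public cCallback
{
public:
    typedef void (cInstance::*tFunction)(int Param);

    TCallback() : m_Instance(0), m_Function(0) {}

    // remember which object and which member function to call back
    void SetCallback(cInstance *Instance, tFunction Function)
    {
        m_Instance = Instance;
        m_Function = Function;
    }

    virtual void Execute(int Param) const
    {
        if (m_Instance && m_Function)
            (m_Instance->*m_Function)(Param);  // call through the instance
    }

private:
    cInstance *m_Instance;  // the "this" pointer of the target object
    tFunction  m_Function;  // the member function to call back
};
```

A TCallback<cMyProject> can then be passed around as a plain cCallback*, which is what makes the mixed-class callback arrays in the article possible.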
This class defines an Execute() function that takes one integer parameter and returns no parameter (void). You can simply adapt it to your needs; for example, a callback that takes five paramaters and returns a bool. (Then, you have to modify three lines: the two lines beginning with "virtual void Execute" and the typedef.)
To completely understand this class, you need advanced C++ knowledge. I will not explain all the details here because this would be too much.
Instead I recommend the very good book:
Author: André Willms
Title: C++ Programming (German title: "C++ Programmierung")
Publisher: Addison Wesley
ISBN 3-8273-1495-X
And from my homepage, you can download free C++ books in the compiled HTML format.
On 12/07/2011 05:10 PM, Mike Frysinger wrote:
> On Friday 02 December 2011 00:16:43 Duncan wrote:
>> Longer: Does reiserfs (v3) support xattrs and thus, presumably, caps and
>> XT_PAX? Kernel reiserfs options suggest yes, but everything I've read
>> elsewhere (including gentoo-dev caps project discussions) seems to
>> indicate no. Is the "no" simply outdated, since reiserfs xattrs support
>> was added relatively late in the game, or is it still correct and I have
>> the purpose of those kernel options all wrong, or ???
> i found reiserfs useful in the ext2/ext3 days, but now i find it completely
> irrelevant with ext4
>
>> If both reiserfs and tmpfs (my $PORTAGE_TMPDIR) support xattrs, both caps
>> and XT_PAX should be good to go, correct?
> my understanding of libattr/libacl is that the userland interface is FS
> independent. once the FS supports xattrs (if the kernel says it does, i'd
> believe that), then that should be all you need.
>
> while i've never tested xattrs on reiserfs (as alluded earlier, i've dropped
> all my reiserfs usage in favor of ext4), i know tmpfs works (once you've
> enabled it in the kernel).
> -mike

I just tested with reiser3 and xattr works just fine. Just make sure it's enabled in the kernel, and when you mount the fs, use the option user_xattr for the user. namespace.
--
75 Reader Comments
You missed his point. He isn't saying normatively that the price should be $0; he is stating, as an issue of fact, that prices in a competitive market tend toward the marginal cost of production, i.e. $0 in the case of digital goods. Of course, that's a simplification, since you don't have perfect substitutes, but the underlying analysis is sound.
Overall, I'm not much of a fan of long-term copyrights as far as they pertain to low-fixed-cost goods (music would definitely apply as such); as it currently stands, even those on higher-fixed-cost creative goods (like film) are longer than is typically needed to ensure an adequate return on investment (in any case, the discounted value of residual revenues 30-50 years down the road would be practically zero).
In any case, the idea of viewing intellectual property rights less as a legal mechanism and more as a tool to signal creative ownership may be in order. Assume we had much shorter copyrights on music, say, five to ten years. Would that likely stop hard rock fans from buying a new Metallica album? If the price was agreeable to them, probably not. Would it stop Metallica from making new music? Possibly, but only if they thought that long-tail sales were integral to their business strategy (if they are, then I'd suggest another line of work besides music). Would it stop Metallica from being able to sell their back catalog, even after the copyrights expired? No. While true that music would go into the public domain much faster, in the case of my example, Metallica still has an exclusive ability to, well, be Metallica. They can leverage their past content, even without copyright, in ways that nobody else can, thereby giving them a tremendous advantage in the market. It's this aspect that seems to get lost in the intellectual property debate for creative works.
A curious thing is the difference between Anglo-Saxon copyright and continental authorship rights. In continental Europe there are in fact inalienable rights tied to creative works, such as the right to claim and be recognized for the "paternity" of the work and to prevent its defacement (I think similar consequences exist in common law, but not statutory copyright law). However, neither system regards copyright/authorship rights as "property rights" in any way. If they did, any term (long or short) would be nonsensical.
In regulating copyright I hope this perspective isn't lost. Already we see a divorce between the law and the people's view: life + 70 years is far too long a period to subject most works to copyright, as studies have demonstrated that save for very rare exceptions most income generated by a work occurs in the few years after it's initially made available.
Copyright is in a way similar to industrial property rights, although they differ in many aspects. But the differences between the two are increasingly unjustifiable. R&D costs a lot of money, yet the industry recognizes and accepts that it has at most 20 years to recoup the investment. This hasn't stifled innovation. Why, then, should authors be paid for their work even after they're dead?
Add in the fact that in many cases, if not most, copyright is irrevocably assigned to publishers/producers, and the industry gets an even nastier image among consumers because copyright doesn't benefit the artist, but the faceless corporation that bought the copyright. I think this is one of the reasons why people don't look at copyright infringement as a serious offence....
Nail, meet hammer.
Content creators are entitled to remuneration for their efforts in so much as it encourages them to produce new works. Copyright is not designed to allow someone to live off a single work for the rest of their life. The better the work is, though, the more money they will make, but only because more people will be willing to pay for the content while it is still under copyright.
I find it disgusting that many rightsholders - *cough* Hollywood *cough* - are permitted to have copyright terms in excess of 10 years. Even 5 is stretching it in my book. After all, most films etc turn a profit in under a year, with the blockbusters doing this in under a month.
Betcha if Ars called the RIAA members, the members would not give any breakdown of back catalogue/new music sales on a year-by-year basis. I betcha further that the 1990s was a good split, and that back catalogue sales have been dropping while new music sales have stayed the same.
So the fact that I'm not buying new music has nothing to do with copyright, and no amount of copyright reform will help the situation. I am so discouraged that no work of music or literature or software that was produced in my lifetime will ever go out of copyright and be in the public domain that I couldn't care less about "piracy" since being a "pirate" is the only way I will ever get out of print music that the RIAA members will not sell me but still hold copyright on.
Well except that the internet meme that it costs zero to produce digital goods is patently false. Infinite copies theoretically means that the cost of production can be spread to near zero (but never zero since distribution even digitally costs) but as a practical matter needs to be recouped in a shorter time frame than the heat death of the universe because some of the expenses of running a business are on a short time frame.
How many actually do that? Even the Beatles created many works to "live off of".
Today, when you are bombarded with music from every direction all day long and from all around the world, things are very, very different. All, and I will repeat that, ALL modern music is regurgitating something that someone else has written before. The reasons for that are fairly simple. There are a limited number of notes. Only certain notes sound good together. Useful rhythms are limited as well.
For an example there are 12 notes in the western chromatic scale. Imagine if only the first 12 letters of the alphabet existed and you could only use words that contained those letters. Now imagine stories were say 5 sentences long. And you had the whole history of the English language to draw on and the internet existed. How long until every single 5 sentence story that wasn't gibberish was written?
Another fallacy among many is that good musicians write and perform mostly based on money. As a musician, I can tell you that is very far from the truth.
1. Mentioned in the article: an author spends a year creating his work and should therefore be remunerated for that year of work. I agree. But what defines your cost of living for that year? Surely that should not include entertainment and excesses, no?
2. Copyright laws are quite archaic, having changed little over the years, just been amended. Just 20 years ago, distributing music was costly because one would have to pay for logistics and distribution to limited geographies. This is especially true when doing international distribution. But today, digital distribution costs next to nothing (comparatively). So... why are we still paying these costs? Physical distribution has also become increasingly convenient and costs far less than before. So why the prices?
3. Agents and studios used to be the protector of the copyright holders. They take their fees from the copyright holders. But these days, they want a huge cut of the price. Why do consumers have to make them fat? Most of us would recognize that the issues with the prices of artistic works these days aren't because of the artists but the agents and middlemen.
There's no need for the monopoly any more if we can find a way to fairly compensate creators, and government funding of artists is a way to do that. Patronage, donation, grants and funding.
That's kind of my point. Copyright is being used in a way that it was not designed for. The Beatles (to use your example) have made a phenomenal amount of money, but would still have been very rich had the copyright to their songs expired after 10 years. If Paul found out one day that he was running low on cash, he could either create some new stuff, or do what everyone else on the planet does: get a normal job.
EDIT: If a content creator is able to live off their existing work, then there is no incentive for them to produce more - which goes against the original aim of copyright to encourage content creation.
This comment was edited by KeyboardCat on February 08, 2010 17:25
This has nothing to do with the internet. There is a common intuition regarding the value of objects that simply don't take "research and development" into consideration. Put bluntly, your overhead really isn't the consumer's problem. They see a product as having value based on what they can readily observe. If all they see is a $2 bit of plastic, then they will tend to devalue the product. This is the side effect of how people are used to living in the physical world. You are trying to fight against 100 thousand years of habit.
It doesn't help that the music industry has a bad reputation for being inefficient when it comes to the "research and development" side of things that aren't readily apparent in the physical manifestation of the end product.
Great write-up. It really is quite amazing to sit down and think about where we are now (above) vs. where we started from. I think that to look at the subject holistically, the difference between individuals who create content vs. corporations/organizations needs to be part of the discussion. The term for copyrights to expire was, at some point, meant to cover the lifespan of its creator... but with corporations, who have an indefinite lifespan, creating content like movies or purchasing copyrights from content producers it really does become a whole new beast which seems almost irreconcilable with the original intent of copyrights.
Means: Create limited term monopoly
People seem unable to distinguish means from ends. You can disagree with the means without disagreeing as to the ends.
Also, when talking about "rights," does he mean god-given moral rights that exist permanently and forever regardless of the legal or social climate? Or rights under a particular legal regime?
To take this to the logical conclusion, would a monetary return be a valid limitation in copyright law? One that is difficult to manage and set a price point on, but somewhat reflective of the market right now anyway. Games and movies hit "platinum" editions that are cheaper and gradually go through lower pricing points, books become cheap paperbacks (and will in future probably become gradually lower priced ebooks like games). Why not just make a point in the future where, with the copyrighted item having returned enough to satisfactorily compensate the creator/owner, the copyright is removed?
I can't see huge problems with this. People still pay $60 for games, $10 for cinema tickets and $30 for hardcover books despite the common knowledge that they'll be significantly cheaper in just a couple of years. The big problem might be determining what's in and out of copyright, with the easiest way of preventing the system being exploited being stronger copyright law. But hey, I'd be for that as a measure to sway the industries towards such a program. They can't bitch about piracy eating away their profits and if you don't want to pay for media then you can consume the ample amounts available through a legal bittorrent network.
edit: Thinking about it, the big issue would be stopping the government from doing what it's doing with copyright now, forever expanding what it takes to hit "reasonable" returns so that things stay under copyright forever
Why not? If the choice was between creating new works and having no "entertainment" or "excess," ever, or doing anything else and having all the "entertainment" you could eat, why on earth would you choose the former option? So each potential author would choose not to create new works and society as a whole would be much poorer.
Enough incentive that someone who is capable of creating new works is able to (i.e., doesn't starve before it's finished) and wants to (i.e., doing so provides a lifestyle that's at least as attractive as the alternatives that may be available) is the key.
We want the content creators to keep producing works that can be disseminated throughout society for the benefit of the people. In exchange for that dissemination, we agree to give the content creator a temporary monopoly on his works so that he can be justly compensated. In addition, should someone violate his temporary monopoly, we'll allow him recourse against that person. All of this is done with the understanding that once that temporary monopoly is extinguished (and it was very temporary in comparison to today's ridiculous term extensions), the creator's work would be freely disseminated for all to enjoy. That is the true purpose of copyright and it disgusts me to see how perverted it has become.
One final side point: no matter what argument one would like to make, digital piracy is not stealing. The legal definition of theft is succinctly put as the permanent deprivation of another's property. The piracy of a digital copy is not, and can never be, permanent deprivation because of the very idea that you're taking one of an infinite number of copies. The content creator has not been deprived of his original work. Piracy is, at most, the loss of a sale, and that presumes that the individual was going to purchase the work in the first place.
This comment was edited by boden on February 08, 2010 18:18
This is in fact not any more miserable than any other job in a competitive market. In long-term equilibrium (the natural state of a competitive market) marginal revenue (price) equals average total cost. Total cost is the sum of marginal costs, fixed costs, and opportunity costs. If you did not meet or exceed these, it would be more profitable to do something else, so you would leave. This is the same situation. The "smallest amount of money they need to keep creating" is the most that they could make from another job. If the best you could make is $50,000 copywriting for some company, then your "profits" from creative work should be about $50,000.
Economically, again, this is in fact true. Sort of. In perfect competition, price is found at the intersection of marginal revenue and marginal cost. If this is beneath your average total costs, tough. In monopolistic competition, which is what creative markets have to be (not monopoly, which is what they are under copyright), marginal revenue is the same as the price at double the quantity (it's complicated). The result of this is that price at a given quantity is greater than the marginal cost, and in equilibrium it equals the average total cost (which includes all those pesky expenses).
Copyright arguably exists to protect society from pirates, who would charge at a much lower price (because they have lower total costs than producers) and thereby force all of the actual producers out of the market. However, at this point, it is doing way more than that, as can be seen simply by noting that creators' revenues generally exceed their costs by a large margin.
The problem I have is that much of the music I listen to is out of print, meaning that it cannot be bought in a recent format, i.e. CD or DVD, as a new product. The only recourse is to buy an overpriced used copy (you never really know what condition it will be in though), or do without, or, as distasteful as it may be, to download from the internet.
Copyright was never meant to cause this problem to arise. With longer copyright limits obtaining older works is nigh impossible.
The solid truth. But I do hope with our internet-media friendly society (as it stands) will plant the seed for discussion about this very vital issue to the virtual world. I enjoy reading about both sides, and this article summed it up in a nut shell.
Lots of people mention the Statute of Anne, but very few bother to pay attention to the second part of the Statute, which gave various state functionaries the power to alter the price of books as they saw fit if anyone claimed they were too expensive. The Statute makes it clear that copyright was regarded as a means of reward, but not of undue enrichment. The word 'property', or its analogues, never appears in the Statute, simply because the notion of intellectual property was completely foreign to its framers. Those authors bitching about opt-out requirements for the GB settlement should also note how the Statute made it clear that an affirmative register of copyright was an essential element of the law. The notion that copyright springs into being automatically upon creation is another modern concoction, certainly with regard to Anglo-Saxon tradition.
The situation is slightly different under civil law (eg France), which has a few examples in its history of authors winning property rights for their work. Despite this, French legal history carries a tradition in which it is the editor or publisher who receives the greatest protection. There was a concerted effort in the late 1930's under Jean Zay to replace any notion of intellectual property with the definition of authors as 'intellectual workers' who were granted a licence which limited the rights of the editors who published their work. This law was actually supported by many authors of the time, but failed to be passed because of opposition from the editorial lobby.
The simple fact is that 'Intellectual Property', in the form we know it today, has little or no root in legal history before the twentieth century.
While it's true the powers that be will never listen to common sense, do what i do - keep 'pirating' all their works and force them to change. They will not listen to reason but respect an opposing force.
Convert everyone around you to the free side
The truth is, most of the new music and films are garbage, but i still download it because i can share it more effectively to people who want the crap (i'm on a 100mbps connection).. a lot of the music i share i have never listened to even once, and never will.
I'm aware that this is what economic theory says, but it's quite obviously not true that (as in the Yglesias example) price should equal marginal cost and marginal cost is zero in some kind of strict equation. If I know in advance that my marginal cost for distributing a book will in fact be zero, and if I believe that the market will in fact immediately force the price of each copy to zero, I'm not going to spend a year writing the book (assuming that I need money to live on; those who are independently wealthy can still write, as can those who have the time and energy to create while holding down some other job).
Though I suspect that I disagree with DeLong about almost everything, he does make some good points about precisely this topic in an old essay:
yesno, you are my hero. Just by reading some of these comments, I can tell that a lot of arguments regarding these matters are rooted in personal opinions on who should get paid, for how long, and what for. I think that before we can reach a successful medium for copyright we are going to have to step back and really re-evaluate the purpose and the big picture of copyright in the U.S. today.
Personally, I think this would be a bad idea as it would remove the incentive to produce good works. I believe that better works should result in the likelihood of greater profit. We still need to correlate potential return (i.e. profit) with benefit to society (i.e. quality) for the copyright system to work as intended.
This kind of price reduction is just market forces in play. People are less willing to pay as much for old stuff as they are new stuff. This is mainly because, as time goes on, more people will have consumed the work and are therefore less likely to want to spend as much on repeat consumptions. Furthermore, after-markets like second-hand sales serve to drive down the price that old works can command.
1. there is no such thing as intellectual property.
2. how can you consume something thats not physical?
who defines what is "good"?
Easy. You "consume" a film by watching it. You "consume" a song by listening to it. You "consume" a book by reading it. Etc, etc.
Good is an entirely subjective term that is defined by the content creator. What I meant was that a content creator is likely to produce things they think are good (i.e. be "consumed" by the most people) if this course of action will net them a greater reward. If the quality of their work has no bearing on their return then they will just produce any old stuff.
nope, as i can experience each "work" multiple times if i so choose. If i consumed it, it would be gone after i experienced it one time, and would have to be created again.
When copyright concerns were being actively discussed in the past, it has been at times where it took long hours and a lot of effort to create singular works.
Prior to the invention of telephones, writing music for any instrument could take months to years of work to produce a single song because you had to tune and retune instruments, had to write lyrics that would please the public ear, and all of this was done by a single artist or maybe a small group. You didn't have companies. You didn't have computers with software that could reproduce sounds and melodies without having to tune instruments. And if your music was requested to be heard, you had to travel long distances to perform live or to send a hand-scribed copy of your music to another artist to perform.
Then travel got easier, there was television and radio to bring out more avenues to spread the music, and newer instruments are easier to tune - you don't have to find a specialist to get a horse hair to replace a broken string - just run down to the local store!
Now we have the modern world, full of instant telecommunication and telepresence. You can copy a single song onto an infinite amount of digital space and onto only slightly less numerous physical discs that can contain many songs.
All of today's technology has only helped artists create more works of art and music and books and culture. They can produce it faster, spread it farther, and the higher population of the world makes it more profitable. All the while, they are still being protected as if their work of "art" took a year to produce (which in some cases it might). A band can spit out a 12-song CD in a year, where 100 years ago it could take a year to produce 1, maybe 2 songs. An author could type up a book, have it edited, published, and printed all in the same time... 100 years ago the same feat could take multiple years and would never reach the amount of people it can reach today.
Look at many of these 'artists' and 'actors' and look at their lifestyle. The extravagance and excess is outright over the top in many cases. This is all done under the pretense that "digital piracy is stealing money away from these artists"... Even if the MPAA and RIAA are taking profit away from artists, how are some of these artists still doing so well?
I do admit, there are many artists who struggle with original content they developed on their own. Piracy does hurt these smaller artists more than the big names because they have a much smaller market and do not have the publicity of the big names... Guess who the MPAA/RIAA are trying to "protect"? The big artists, the ones with 12 cars and 3 mansions and a personal jet. The marketable songs and names that sell the most. The MPAA/RIAA are businesses and are out to make money.
Part of the problem is that copyright is being treated like a single song or book or poem is all the artist can produce in a reasonable time, that the proceeds from marketing that singular piece of culture should support the artist financially... Problem is, modern artists can churn out these songs/books/poems at a fraction of the time it would take in previous years. While most of it will never be as marketable as a Top 10 hit, or whatever, it is still treated as if it should support the life of the artist.
An artist produces 10 songs, 1 song makes it to the Top 10 and is heard by millions across the world (maybe billions). The proceeds of that 1 song would be enough to keep a small family out of poverty for a year. Yet the artist also gets proceeds from the other 9 songs.
Then you have extras, like concerts and tours, and of course merchandise. I have no problem with what an artist makes at their public appearances or off of merchandise... they've put the work into performing in public or into designing clothes (lol), they can get a fair share for it.
But the effective income from producing music is inflated for this modern age.
I'm not saying writing music and recording music isn't hard, I'm sure it's not any less difficult than servicing computers, engineering jet engines, or chasing down drug dealers in a bad neighborhood.
But the big artists can make multitudes more than any other profession.
It also has a big rate of failure and a short lifespan (music artist marketable span is about 5-10years).
I have no issue with an artist making a few hundred grand a year... lawyers, doctors, engineers can and do. But the biggest names make millions. That's on the order of 100-1000 times the yearly salary of any profession requiring 10 years+ of education to attain. No individual lives long enough to need such excess, regardless of their age of retirement from "the industry".
Then you throw a middle-man into the retail scheme that marks up that "price" by their margin to make it even more ridiculous. Then you throw in the middle-man to produce a CD, marking it up again. Then you throw in the retailer to mark it up another %. Digital distribution cuts out the retailer and manufacturer.
I'll add this in: if the bigger artists were made a bit less marketable through copyright law adjustments, perhaps the smaller "starving" artists would have a bit more room to step into the market and profit. I have no problem with someone profiting from their intellectual creation and our produced culture.
But the market is very top-heavy right now.
Having lifetime copyrights just allows artists to create small amounts of work and then profit for the rest of their lives while still maintaining extravagant lifestyles. Who else do you know that can retire by the age of 30 and have 3+ large houses, more than 3 cars, and never have to worry about money for the rest of their life?
I was using the term "consume" as you are using the term "experience". I don't want to get caught up in an argument on semantics, so I am willing to concede that I should have used the term "experience" instead of "consume".
Or, to put it another way,
Then you completely missed the point, because that's not what 'consume' means.
con⋅sum⋅er [kuhn-soo-mer]
–noun
1. a person or thing that consumes.
2. Economics. a person or organization that uses a commodity or service.
3. Ecology. an organism, usually an animal, that feeds on plants or other animals.
You misused the verb consume when referring to consumers right here:
Keyboard cat was getting at the fact that a consumer, in the economic sense, "uses" goods and services. If intellectual property was a good or service, then you can use it as a consumer, just as you would any other product. You can't argue the invalidity of intellectual property on the basis that you can't "consume" it, because you typically don't consume most economic goods and services - you use them.
Anyway, piracy quite clearly is stealing, just a subset of stealing that occurs on the high seas...
Hi
I have installed AppStudio for ArcGIS on Windows from the beta 4 download (v 10.2.5.1079), and am trying to add a WebView to an app but am receiving the error message at build that;
module "QtWebKit" is not installed
import QtWebKit 3.0
I had been following the samples from Qt for the QML WebKit Web View. I also tried with the Qt Web View 1.0 available from 5.5 but this doesn't seem available either.
The What's New indicates that the Web View is available, but I'm not sure which we should be using. Can anyone provide some advice and an example that will work with AppStudio for ArcGIS, please?
I tried both the windows x64 and x86 versions.
Thanks
Andrew
I tried again with Qt WebView 1.0, which does run; however, I don't see a page in the app.
Has anybody successfully set up a web view with AppStudio?
Scatter Plots in Python
How to make scatter plots.
Scatter plot with Plotly Express¶
Plotly Express is the easy-to-use, high-level interface to Plotly, which operates on a variety of types of data and produces easy-to-style figures.
With px.scatter, each data point is represented as a marker point, whose location is given by the x and y columns.
# x and y given as array_like objects
import plotly.express as px

fig = px.scatter(x=[0, 1, 2, 3, 4], y=[0, 1, 4, 9, 16])
fig.show()
# x and y given as DataFrame columns
import plotly.express as px

df = px.data.iris()  # iris is a pandas DataFrame
fig = px.scatter(df, x="sepal_width", y="sepal_length")
fig.show()
import plotly.express as px

df = px.data.iris()
fig = px.scatter(df, x="sepal_width", y="sepal_length", color="species",
                 size='petal_length', hover_data=['petal_width'])
fig.show()
import plotly.express as px
import numpy as np

t = np.linspace(0, 2*np.pi, 100)

fig = px.line(x=t, y=np.cos(t), labels={'x':'t', 'y':'cos(t)'})
fig.show()
import plotly.express as px

df = px.data.gapminder().query("continent == 'Oceania'")
fig = px.line(df, x='year', y='lifeExp', color='country')
fig.show()
Scatter and line plot with go.Scatter¶

If Plotly Express does not provide a good starting point, it is possible to use the more generic go.Scatter class from plotly.graph_objects. The different options of go.Scatter are documented in its reference page.
Simple Scatter Plot¶
import plotly.graph_objects as go
import numpy as np

N = 1000
t = np.linspace(0, 10, 100)
y = np.sin(t)

fig = go.Figure(data=go.Scatter(x=t, y=y, mode='markers'))
fig.show()
Line and Scatter Plots¶
Use the mode argument to choose between markers, lines, or a combination of both. For more options about line plots, see also the line charts notebook and the filled area plots notebook.
Bubble Scatter Plots¶
In bubble charts, a third dimension of the data is shown through the size of markers. For more examples, see the bubble chart notebook
import plotly.graph_objects as go

fig = go.Figure(data=go.Scatter(
    x=[1, 2, 3, 4],
    y=[10, 11, 12, 13],
    mode='markers',
    marker=dict(size=[40, 60, 80, 100],
                color=[0, 1, 2, 3])
))
fig.show()
import plotly.graph_objects as go
import numpy as np

t = np.linspace(0, 10, 100)

fig = go.Figure()

fig.add_trace(go.Scatter(
    x=t, y=np.sin(t),
    name='sin',
    mode='markers',
    marker_color='rgba(152, 0, 0, .8)'
))

fig.add_trace(go.Scatter(
    x=t, y=np.cos(t),
    name='cos',
    marker_color='rgba(255, 182, 193, .9)'
))

# Set options common to all traces with fig.update_traces
fig.update_traces(mode='markers', marker_line_width=2, marker_size=10)
fig.update_layout(title='Styled Scatter',
                  yaxis_zeroline=False, xaxis_zeroline=False)

fig.show()
import plotly.graph_objects as go
import pandas as pd

data = pd.read_csv("")

fig = go.Figure(data=go.Scatter(x=data['Postal'],
                                y=data['Population'],
                                mode='markers',
                                marker_color=data['Population'],
                                text=data['State']))  # hover text goes here

fig.update_layout(title='Population of USA States')
fig.show()
import plotly.graph_objects as go
import numpy as np

fig = go.Figure(data=go.Scatter(
    y=np.random.randn(500),
    mode='markers',
    marker=dict(
        size=16,
        color=np.random.randn(500),  # set color equal to a variable
        colorscale='Viridis',  # one of plotly colorscales
        showscale=True
    )
))

fig.show()
# First we make a function that splits a string p up into a set of
# non-overlapping, non-empty substrings.
def partition(p, pieces=2):
    assert len(p) >= pieces
    base, mod = len(p) / pieces, len(p) % pieces
    idx = 0
    ps = []
    modAdjust = 1
    for i in xrange(0, pieces):
        if i >= mod:
            modAdjust = 0
        newIdx = idx + base + modAdjust
        ps.append(p[idx:newIdx])
        idx = newIdx
    return ps
def bmApproximate(p, t, k, alph="ACGT"):
    """ Use the pigeonhole principle together with Boyer-Moore to find
        approximate matches with up to a specified number of mismatches. """
    assert len(p) >= k+1
    ps = partition(p, k+1)  # split p into list of k+1 non-empty, non-overlapping substrings
    off = 0  # offset into p of current partition
    occurrences = set()  # note we might see the same occurrence >1 time
    for pi in ps:  # for each partition
        # NOTE: I haven't given the implementation for the BMPreprocessing object.
        # It implements the Boyer-Moore skipping rules as we discussed in class.
        bm_prep = BMPreprocessing(pi, alph=alph)  # BM preprocess the partition
        for hit in bm_prep.match(t)[0]:
            if hit - off < 0: continue  # pattern falls off left end of T?
            if hit + len(p) - off > len(t): continue  # falls off right end?
            # Count mismatches to left and right of the matching partition
            nmm = 0
            for i in range(0, off) + range(off+len(pi), len(p)):
                if t[hit-off+i] != p[i]:
                    nmm += 1
                    if nmm > k:
                        break  # exceeded maximum # mismatches
            if nmm <= k:
                occurrences.add(hit-off)  # approximate match
        off += len(pi)  # Update offset of current partition
    return sorted(list(occurrences))
bmApproximate('needle', 'needle noodle nargle', 2, alph='abcdefghijklmnopqrstuvwxyz ')
[0, 7] | http://nbviewer.jupyter.org/github/BenLangmead/comp-genomics-class/blob/master/notebooks/CG_BoyerMooreApprox.ipynb | CC-MAIN-2018-51 | refinedweb | 275 | 50.57 |
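The notebook above is Python 2 (xrange, integer /, and concatenating range objects). For readers following along in Python 3, here is my own sketch of the same partition logic, not part of the original notebook:

```python
def partition3(p, pieces=2):
    """Split string p into `pieces` non-overlapping, non-empty substrings,
    giving the first len(p) % pieces pieces one extra character."""
    assert len(p) >= pieces
    base, mod = divmod(len(p), pieces)
    ps, idx = [], 0
    for i in range(pieces):
        step = base + (1 if i < mod else 0)  # first `mod` pieces get one extra char
        ps.append(p[idx:idx + step])
        idx += step
    return ps

print(partition3('needle', 3))  # → ['ne', 'ed', 'le']
```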
Description
Background
The knight is getting bored of seeing the same black and white squares again and again and has decided to make a journey around the world.
Problem
Find a path such that the knight visits every square once. The knight can start and end on any square of the board.
Input
The input begins with a positive integer n in the first line. The following lines contain n test cases. Each test case consists of a single line with two positive integers p and q, such that 1 <= p * q <= 26. This represents a p * q chessboard, where p describes how many different square numbers 1, . . . , p exist, q describes how many different square letters exist. These are the first q letters of the Latin alphabet: A, . . .
Output
The output for every scenario begins with a line containing “Scenario #i:”, where i is the number of the scenario starting at 1. Then print a single line containing the lexicographically first path that visits all squares of the chessboard with knight moves followed by an empty line. The path should be given on a single line by concatenating the names of the visited squares. Each square name consists of a capital letter followed by a number.
If no such path exist, you should output impossible on a single line.
Sample Input
3
1 1
2 3
4 3
Sample Output
Scenario #1:
A1

Scenario #2:
impossible

Scenario #3:
A1B3C1A2B4C2A3B1C3A4B2C4
Solution below . . .
import java.util.Scanner;

class KnightTour {
    static int H, W;

    /*
     * Check if x,y are within the bounds of an H*W chessboard
     */
    static boolean isSafe(int x, int y, int sol[][]) {
        return (x >= 0 && x < H && y >= 0 && y < W && sol[x][y] == -1);
    }

    /*
     * A utility function to print solution in the desired format
     */
    static void printSolution(int sol[][]) {
        String[] solution = new String[H * W];
        /*
         * The solution matrix (sol[][]) contains the solution as a
         * sequence of numbers from 0 to H * W - 1, showing the order
         * in which each square is visited. We need to convert that
         * to the output format specified by the problem description.
         */
        for (int x = 0; x < H; x++) {
            for (int y = 0; y < W; y++) {
                char row = (char) ('A' + y);
                int col = x + 1;
                solution[sol[x][y]] = "" + row + col;
            }
        }
        StringBuilder sb = new StringBuilder();
        for (String s : solution) {
            sb.append(s);
        }
        System.out.println(sb.toString());
    }

    /*
     * This function solves the Knight Tour problem using backtracking. Backtracking
     * is not the most efficient solution but it does let us control the order in
     * which squares are visited. This is important because there can be multiple
     * solutions and the problem asks for the lexicographically first path.
     */
    static boolean solveKT(int test) {
        int sol[][] = new int[H][W];

        /* Initialization of solution matrix */
        for (int x = 0; x < H; x++)
            for (int y = 0; y < W; y++)
                sol[x][y] = -1;

        /*
         * xMove[] and yMove[] define next moves for knight. xMove[] is for next value of
         * x coordinate, yMove[] is for next value of y coordinate. The move arrays are
         * ordered so as to visit squares in lexicographical order.
         */
        int xMove[] = { -1, 1, -2, 2, -2, 2, -1, 1 };
        int yMove[] = { -2, -2, -1, -1, 1, 1, 2, 2 };

        // Since the Knight is initially at the first square
        sol[0][0] = 0;
        System.out.println("Scenario #" + test + ":");

        /*
         * Start from 0,0 and explore all tours using solveKTUtil()
         */
        if (!solveKTUtil(0, 0, 1, sol, xMove, yMove)) {
            System.out.println("impossible");
            return false;
        } else
            printSolution(sol);
        return true;
    }

    /*
     * A recursive utility function to solve Knight Tour problem
     */
    static boolean solveKTUtil(int x, int y, int movei, int sol[][], int xMove[], int yMove[]) {
        int k, next_x, next_y;
        if (movei == H * W)
            return true;

        /*
         * Try all next moves from the current coordinate x, y
         */
        for (k = 0; k < 8; k++) {
            next_x = x + xMove[k];
            next_y = y + yMove[k];
            if (isSafe(next_x, next_y, sol)) {
                sol[next_x][next_y] = movei;
                if (solveKTUtil(next_x, next_y, movei + 1, sol, xMove, yMove))
                    return true;
                else
                    sol[next_x][next_y] = -1; // backtracking
            }
        }
        return false;
    }

    public static void main(String args[]) {
        Scanner sc = new Scanner(System.in);
        int T = sc.nextInt();
        int test = 1;
        while (test <= T) {
            H = sc.nextInt();
            W = sc.nextInt();
            solveKT(test);
            if (test < T)
                System.out.println();
            test++;
        }
    }
}
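As a quick sanity check on the Scenario #3 answer, a short standalone script (my own helper, not part of the original post) can confirm that a path string visits every square of a p * q board exactly once using only knight moves. It assumes single-digit row numbers, which always holds here since p * q <= 26:

```python
def is_valid_knight_tour(path, p, q):
    """Check that `path` (e.g. 'A1B3...') visits every square of a p-by-q
    board exactly once, with consecutive squares a knight's move apart.
    Letters are the q columns, numbers the p rows (single digits only)."""
    squares = [(path[i], int(path[i + 1])) for i in range(0, len(path), 2)]
    board = {(chr(ord('A') + c), r + 1) for c in range(q) for r in range(p)}
    if len(squares) != p * q or set(squares) != board:
        return False  # missing, duplicate, or off-board squares
    for (l1, n1), (l2, n2) in zip(squares, squares[1:]):
        # a knight move changes one coordinate by 1 and the other by 2
        if {abs(ord(l1) - ord(l2)), abs(n1 - n2)} != {1, 2}:
            return False
    return True

print(is_valid_knight_tour("A1B3C1A2B4C2A3B1C3A4B2C4", 4, 3))  # → True
```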