text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Net::BitTorrent::Torrent - Class Representing a Single .torrent File
Net::BitTorrent::Torrent objects are typically created by the
Net::BitTorrent class.
Standalone
Net::BitTorrent::Torrent objects can be made for informational use. See new ( ) and queue ( ).
new ( { [ARGS] } )
Creates a
Net::BitTorrent::Torrent object..
Resume
The filename used to gather and store resume data.
This is an optional parameter.
No default. Without a defined resume file, resume data will not be written on calls to save_resume_data ( ) without a
PATH parameter.
Status
Initial status of the torrent. This parameter is ORed with the loaded and queued (if applicable) values.
For example, you could set the torrent to automatically start after hashcheck with
{ [...] Status => START_AFTER_CHECK, [...] }.
To import all supported statuses into your namespace, use the
status keyword.
This is an optional parameter.
Default: 1 (started)
See also: status ( )
Note: This is alpha code and may not work correctly.
true when.
pause ( )
Pauses an active torrent without closing related sockets.
See also: status ( ), stop ( ), start ( )
peers ( )
Returns a list of remote peers related to this torrent. ( [ RAW ] )
Returns the bencoded metadata found in the .torrent file. This method returns the original metadata in either bencoded form or as a raw hash (if you have other plans for the data) depending on the boolean value of the optional
RAW parameter.
resume_path ( )
Returns the default path used to store resume data. This value is set in the
Resume parameter to new.
save_resume_data ( [ PATH ] )
One end of Net::BitTorrent's resume system. This method writes the data to the file specified in the call to new( ) or (if defined) to the
PATH parameter.
See also: Resume API and How do I quick Resume a .torrent Session Between Client Sessions? in Net::BitTorrent::Notes
size ( )
Returns the total size of all files listed in the .torrent file.
status ( )
Returns the internal status of this
Net::BitTorrent::Torrent object. States are bitwise
AND values of...
For example, a status of
201 implies the torrent is
QUEUED | LOADED | CHECKED | STARTED.
When torrents have the a status that indicates an error, they must be restarted (if possible). The reason for the error may be returned by error ( ).
Import the
:status tag ( )
trackers
Returns a list of all Net::BitTorrent::Torrent::Tracker objects related to the torrent.
uploaded ( )
Returns the total amount uploaded to remote peers since the client started transferring data related to this .torrent.
See also: downloaded ( )
as_string ( [ VERBOSE ] )
Returns a 'ready to print' dump of the object's data structure. If called in void context, the structure is printed to
STDERR.
VERBOSE is a boolean value.
When triggered, per-torrent callbacks receive two arguments: the
Net::BitTorrent::Torrent object and a hashref containing pertinent information. Per-torrent callbacks also trigger client-wide callbacks when the current torrent is queued.
Per-torrent callbacks are limited to tracker-, piece-, and file-related events. See Net::BitTorrent for client-wide callbacks.. | http://search.cpan.org/~sanko/Net-BitTorrent-0.052/lib/Net/BitTorrent/Torrent.pm | CC-MAIN-2014-52 | refinedweb | 480 | 51.24 |
Jun 17 2021 08:26 AM
Jun 17 2021 08:26 AM
I am trying to use a video file source as input into Percept, but this simple C++ code using OpenCV doesn’t work on the Percept (in the AzureEyeModule docker container). It is unable to open any video file (.mp4 or .avi or .mkv etc.). Are there any libraries missing to be able to read a video file?
Code snippet:
cv::VideoCapture cap("/tmp/app/build/pedestrians_animals.mp4");
if (!cap.isOpened()) {
std::cout << "ERROR! Unable to open file\n";
}
else{
std::cout<<"opened the file!!\n";
}
I have also tried to update ~\src\azure-percept-advanced-development\azureeyemodule\app\model\ssd.cpp, I added the following line to read an inputvideofile which is set to “filepath/filename.mp4”. No matter what, OpenCV just doesn’t want to read the inputvideofile.
Code snippet:
if (!this->inputvideofile.empty()){
// Specify the input video file as the input to the pipeline, and start processing
pipeline.setSource(cv::gapi::wip::make_src<cv::gapi::wip::>(inputvideofile));
}
else
{
// Specify the Azure Percept's Camera as the input to the pipeline, and start processing
pipeline.setSource(cv::gapi::wip::make_src<cv::gapi::mx::Camera>());
}
return pipeline;
If this doesn't work on the Percept due to OpenCV/library issues, is there a way for LVA Edge media graph to specify input source as a video file instead of RTSP?
Jun 17 2021 11:09 AM
Jun 17 2021 11:09 AM
Hi @amitmarathe - thanks for this question. We're looking into this and will get back with a solution or any additional questions we have for you.
Thanks!
Jun 23 2021 05:20 AM
Jun 23 2021 05:20 AM
@amitmarathe If using Python is an option for you, you may want to look into using the "
import cv2 import numpy as np cap = cv2.VideoCapture("/path/to/your_video_file.mp4") if not cap.isOpened(): print("Could not open file") frame_count = 0 # Read until video is completed while(cap.isOpened()): # Capture frame-by-frame ret, frame = cap.read() if ret == True: frame_count += 1 else: break cap.release() print(f"Frames: {frame_count}")
Jul 02 2021 05:24 PM
Jul 02 2021 05:24 PM
@amitmarathe - I know that you've got an active email thread with our engineers covering these questions, but let me know if there is anything that isn't being addressed. Love the work you are doing with the dev kit! :)
Jul 28 2021 10:05 AM
Jul 28 2021 10:05 AM | https://techcommunity.microsoft.com/t5/iot-devices/percept-reading-input-video-source-instead-of-rtsp/td-p/2458648 | CC-MAIN-2022-40 | refinedweb | 418 | 64.81 |
.
One window of opportunity to do exactly that has opened in the past few years. Clojure, a modern variant of Lisp that runs on the Java virtual machine (JVM), has been taking the programming world by storm. It's a real Lisp, which means that it has all of the goodness you would want and expect: functional programming paradigms, easy use of complex data structures and even such advanced facilities as macros. Unlike other Lisps, and contributing in no small part to its success, Clojure sits on top of the JVM, meaning that it can interoperate with Java objects and also work in many existing environments.
In this article, I want to share some of my experiences with starting to experiment with Clojure for Web development. Although I don't foresee using Clojure in much of my professional work, I do believe it's useful and important always to be trying new languages, frameworks and paradigms. Clojure combines Lisp and the JVM in just the right quantities to make it somewhat mainstream, which makes it more interesting than just a cool language that no one is really using for anything practical.
Clojure Basics
Clojure, as I mentioned above, is a version of Lisp that's based on the JVM. This means if you're going to run Clojure programs, you're also going to need a copy of Java. Fortunately, that's not much of an issue nowadays, given Java's popularity. Clojure itself comes as a Java archive (JAR) file, which you then can execute.
But, given the number of Clojure packages and libraries you'll likely want to use, you would be better off using Leiningen, a package manager for installing Clojure and Clojure-related packages. (The name is from a story, "Leiningen and the Ants", and is an indication of how the Clojure community doesn't want to use the established dependency-management system, Ant.) You definitely will want to install Leiningen. If your Linux distribution doesn't include a modern copy already, you can download the shell script from.
Execute this shell script, putting it in your PATH. After you download the Leiningen jarfile, it will download and install Leiningen in your ~/.lein directory (also known as LEIN_HOME). That's all you need in order to start creating a Clojure Web application.
With Leiningen installed, you can create a Web application. But in
order to do that, you'll need to decide which framework to use.
Typically, you create a new Clojure project with
lein new, either
naming the project on which you want to work (
lein new
myproject),
or by naming the template you wish to copy and then the name of the
project (
lein new mytemplate myproject). You can get a list of
existing templates by executing
lein help new or by looking at the site, a repository for Clojure jarfiles and libraries.
You also can open an REPL (read-eval-print loop) in order to communicate directly with Clojure. I'm not going to go into all the details here, but Clojure supports all the basic data types you would expect, some of which are mapped to Java classes. Clojure supports integers and strings, lists and vectors, maps (that is, dictionaries or hashes) and sets. And like all Lisps, Clojure indicates that you want to evaluate (that is, run) code by putting it inside parentheses and putting the function name first. Thus, you can say:
(println "Hello") (println (str "Hello," " " "Reuven")) (println (str (+ 3 5)))
You also can assign variables in Clojure. One of the important things to know about Clojure is that all data is immutable. This is somewhat familiar territory for Python programmers, who are used to having some immutable data types (for example, strings and tuples) in the language. In Clojure, all data is immutable, which means that in order to "change" a string, list, vector or any other data type, you really must reassign the same variable to a new piece of data. For example:
user=> (def person "Reuven") #'user/person user=> (def person (str person " Lerner")) #'user/person user=> person "Reuven Lerner"
Although it might seem strange to have all data be immutable, this tends to reduce or remove a large number of concurrency problems. It also is surprisingly natural to work with given the number of functions in Clojure that transform existing data and the ability to use "def" to define things.
You also can create maps, which are Clojure's implementation of hashes or dictionaries:
user=> (def m {:a 1 :b 2}) #'user/m user=> (get m :a) 1 user=> (get m :x) nil
You can get the value associated with the key "x", or a default if you prefer not to get nil back:
user=> (get m :x "None") "None"
Remember, you can't change your map, because data in Clojure is immutable. However, you can add it to another map, the values of which will override yours:
user=> (assoc m :a 100) {:a 100, :b 2}
One final thing I should point out before diving in is that you
can (of course) create functions in Clojure. You can create an
anonymous function with
fn:
(fn [first second] (+ first second))
The above defines a new function that takes two parameters and adds
them together. However, it's usually nice to put these into a named
function, which you can do with
def:
user=> (def add (fn [first second] (+ first second))) #'user/add user=> (add 5 3) 8
Because this is common, you also can use the
defn macro, which
combines
def and
fn together:
user=> (add 5 3) 8 user=> (defn add [first second] (+ first second)) #'user/add user=> (add 5 3) 8. | http://www.linuxjournal.com/content/intro-clojure-web?page=0,0&quicktabs_1=1 | CC-MAIN-2017-43 | refinedweb | 948 | 54.66 |
When to Use Vertical Stacked Bar Charts
- Jun 11 • 7 min read
- Key Terms: vertical stacked bar charts
Vertical bar charts are useful to illustrate sizes of data using different bar heights. In each vertical bar, we could show a stack of amounts by a certain category type. This type of chart is a vertical stacked bar chart. individual or subscriber. With these two categories for types of riders, we can see a breakdown of each compared to the monthly count of rides with a vertical stacked bar chart.
Import Modules
import pandas as pd import matplotlib.pyplot as plt %matplotlib inline Monthly Scooter Count Ride Data by Account Type Data
df = pd.DataFrame({'month_year': month_list, 'count_scooter_rides': monthly_count_rides, 'count_scooter_rides_subscription': monthly_count_rides_subscribers, 'count_scooter_rides_individual': monthly_count_rides_individual})
df.set_index('month_year')[['count_scooter_rides_subscription', 'count_scooter_rides_individual']].plot(kind='bar', figsize=(12, 10), stacked=True) plt.xticks(rotation=30) plt.title("Count of Rides Per Month Broken Down by Account Type", fontsize=18, y=1.015) plt.ylabel("Count of Rides", fontsize=14, labelpad=15); plt.xlabel("Date (month year format)", fontsize=14, labelpad=15)
<matplotlib.text.Text at 0x10ebb72e8>
Explanation of Scooter Plot
From a total rides perspective, we can see scooter rides increased greatly month over month from May 2017 to November 2017. However, there was a dip in the winter months, and then a drastic increase in the warmer months of March 2018 onwards.
Looking at the account types, it's there are two account types and they each had a share of rides every month. The far majority of rides were individual account riders, not subscribers.
Example: Bike Sales by Bike Type
The example above was a simplified example of a stacked bar chart. Stacked bar charts are often more beneficial for showing the breakdown of greater than two categories.
Let's say we run a bike shop in California. We sell 5 types of bikes: hybrid, mountain, racing, electric and bmx. We'd like to get a high-level view of sales per month over the past year and at a high-level see sales by bike type too.
A sample of the original sales data would be in the format:
To get our data in the intended format mentioned above, we'd need to group this data by month-year, then group by bike type, and then aggregate by sum of sale price amounts.
We'd get data in the format below:
Month Year | Hybrid Sales | Mountain Sales | Racing Sales | Electric Sales | BMX Sales --- | --- | --- May 2017 | 75000 | 9000 | 18000 | 13000 | 5400 June 2017 | 76500 | 8500 | 15000 | 14600 | 4000 July 2017 | 66000 | 5350 | 13500 | 15000 | 3500
Generate Monthly Bike Sales Data by Bike Type
hybrid_bike_sales = [75000, 76500, 66000, 59000, 51000, 28000, 23500, 42000, 18000, 14000, 38000, 55000, 62000] mountain_bike_sales = [9000, 8500, 5350, 4800, 3900, 3500, 3100, 6300, 0, 0, 1800, 7800, 8000] racing_bike_sales = [18000, 15000, 13500, 14000, 12000, 8000, 9000, 15000, 3000, 1400, 7000, 10000, 13000] electric_bike_sales = [13000, 14600, 15000, 22000, 19800, 7000, 5600, 8200, 1300, 3800, 6600, 7800, 12000] bmx_bike_sales = [5400, 4000, 3500, 1200, 1500, 0, 0, 2800, 0, 800, 500, 1800, 2300]
Plot Bike Sales by Bike Type
df2 = pd.DataFrame({'month-year': month_list, 'hybrid': hybrid_bike_sales, 'mountain': mountain_bike_sales, 'racing': racing_bike_sales, 'electric': electric_bike_sales, 'bmx': bmx_bike_sales})
df2.set_index('month-year').plot(kind='bar', stacked=True, figsize=(12, 10)) plt.xticks(rotation=30) plt.ylabel("Bike Sales (U.S. dollars)", fontsize=14, labelpad=15) plt.xlabel("Month Year", fontsize=14, labelpad=15) plt.title("Monthly Bike Sales by Bike Type Over Time", fontsize=18, y=1.02);
Explanation of Bike Sales Plot
From a total sales perspective per month, we can see we had very hgih sales in the early spring months of 2017, then a decline month over month heading into November, a small burst in December likely due to holiday sales, and then a steady rise again in the early summer months of 2018.
We had far more sales in May of 2017 compared to May of 2018. So as a whole, we can infer there may be a potential decline in sales year over year. We'd want more data to show the trend over multiple years to make a firm conclusion on this inference.
Hybrid sales always make up largest portion of bike sales by type.
BMX sales account for the smallest sales by type ever month, and there were only a few bmx bike sales in the winter months. | https://dfrieds.com/data-visualizations/when-use-vertical-stacked-bar-charts | CC-MAIN-2019-26 | refinedweb | 728 | 69.21 |
--- "J.Pietschmann" <[EMAIL PROTECTED]> wrote: > Finn Bock wrote: > > An extension mechanism where I can put an > unmodified fop.jar and > > myextension.jar on the CLASSPATH and have it work > is a defining issue to > > me. > > That's how it should work. The code build into the > FOP core > should only validate elements from the fo namespace > and > attributes from no namespace,
Advertising
Provided the extension namespace isn't already hardcoded into FOP (like the fox: one). > and call validation > for elements > and attributes from other namespaces in roder to > give them a > chance to validate themselves. > Errr, elements can't "validate themselves", because the validity of an element is defined only by the parent. The recommendation declares, via the content models, which children are valid for each parent, not vice-versa. This logic is naturally (and much more cleanly) stored with the parent in the OO world, allowing Finn's block.java to have different child nodes from FOP's block.java. Furthermore, such a child-level validation would require the kid to be instantiated first. vCN() stops instantiation of the kid from ever occurring if it would be invalid to begin with. Glen | https://www.mail-archive.com/fop-dev@xml.apache.org/msg18071.html | CC-MAIN-2018-17 | refinedweb | 194 | 56.05 |
Jamstack is a modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup. One of the aspects of Jamstack is, it is practically serverless. To put it more clearly, we do not maintain any server-side applications. Rather, sites use existing services (like email, media, payment platform, search, and so on).
Did you know, 70% - 80% of the features that once required a custom back-end can now be done entirely without it? In this article, we will learn to build a Jamstack e-commerce application that includes,
- Stripe: A complete payment platform with rich APIs to integrate with.
- Netlify Serverless Lambda Function: Run serverless lambda functions to create awesome APIs.
- Gatsbyjs: A React-based framework for creating prebuilt Markups.
What are we building today?
I love Cats 🐈. We will build a pet store app called
Happy Paws for our customers to purchase some adorable Cats. Customers can buy cats by adding their details to the cart 🛒 and then finally checkout by completing the payment process 💳.
Here is a quick glimpse of the app we intend to build(This is my first ever youtube video with voice. 😍)
TL;DR
In case you want to look into the code or try out the demo in advance, please find them here,
- GitHub Repository => Source Code. Don't forget to give it a star if you find it useful.
- Demo
Please note, Stripe is NOT available in all countries. Please check if Stripe is available in your country. The Demo setup uses a test Stripe account created from the India region. Hence, it is guaranteed to work when accessed from India, and I hope it works elsewhere. However, that doesn't stop you from following the rest of the tutorial.
Create the Project Structure
We will use a Gatsby starter to create the initial project structure. First, we need to install the Gatsby CLI globally. Open a command prompt and run this command.
npm install -g gatsby-cli
After this, use this command to create a gatsby project structure,
gatsby new happy-paws
Once done, you will see a project folder called happy-paws has been created. Try these commands next,
cd happy-paws gatsby develop
You should be able to access the interface using
Setup Netlify Functions
To set up netlify functions, stop the gatsby develop command if running. Install the
netlify-cli tool to run these functions locally.
npm install -g netlify-cli
Create a file called
netlify.toml at the root of the project folder with the following content,
[build] functions = "functions" [[redirects]] from = "/api/*" to = "/.netlify/functions/:splat" status = 200
The above file will tell the Netlify tool to pick up the functions from the
functions folder at the build time. By default, netlify functions will be available as an API and accessible using a URL prefix,
/.netlify/functions. This may not be very user friendly. Hence we want to use a redirect URL as,
/api/*. It means, a URL like
/.netlify/functions/getProducts can now be accessed like,
/api/getProducts.
Next, create a folder called
functions at the root of the project folder and create a
data folder inside it. Create a file called
products.json inside the
data folder with the following content.
[ { "sku": "001", "name": "Brownie", "description": "She is adorable, child like. The cover photo is by Dorota Dylka from Unsplash.", "image": { "url": " "key": "brownie.jpg" }, "amount": 2200, "currency": "USD" }, { "sku": "002", "name": "Flur", "description": "Flur is a Queen. The cover photo is by Milada Vigerova from Unsplash.", "image": { "url": " "key": "flur.jpg" }, "amount": 2000, "currency": "USD" } ]
Here we have added information about two pet cats. You can add as many as you want. Each of the cats is a product for us to sell. It contains information like SKU(a unique identifier common for product inventory management), name, description, image, amount, and the currency.
Next, create a file called,
get-products.js inside the
functions folder with the following content,
const products = require('./data/products.json'); exports.handler = async () => { return { statusCode: 200, body: JSON.stringify(products), }; };
This is our first Netlify Serverless function. It is importing the products from the
products.json file and returning a JSON response. This function will be available as API and accessible using
/api/get-products.
Execute these commands from the root of the project to access this function,
netlify login
This will open a browser tab to help you create an account with Netlify and log in using the credentials.
netlify dev
To run netlify locally on port
8888 by default. Now the API will be accessible at Open a browser and try this URL.
The beauty of it is, the
gatsbyUI is also available on the
We will not access the user interface on the
8000port, and rather we will use the
8888port to access both the user interface and APIs.
Fetch products into the UI
Let us now fetch these products(cats) into the UI. Use this command from the root of the project folder to install a few dependencies first(you can use the npm install command as well),
yarn add axios dotenv react-feather
Now create a file called,
products.js inside
src/components with the following content,
import React, { useState, useEffect } from 'react'; import axios from "axios"; import { ShoppingCart } from 'react-feather'; import Image from './image'; import './products.css'; const Products = () => { const [products, setProducts] = useState([]); const [loaded, setLoaded] = useState(false); const [cart, setCart] = useState([]); useEffect(() => { axios("/api/get-products").then(result => { if (result.status !== 200) { console.error("Error loading shopnotes"); console.error(result); return; } setProducts(result.data); setLoaded(true); }); }, []); const addToCart = sku => { // Code to come here } const buyOne = sku => { // Code to come here } const checkOut = () => { // Code to come here } return ( <> <div className="cart" onClick={() => checkOut()}> <div className="cart-icon"> <ShoppingCart className="img" size={64} </div> <div className="cart-badge">{cart.length}</div> </div> { loaded ? ( <div className="products"> {products.map((product, index) => ( <div className="product" key={`${product.sku}-image`}> <Image fileName={product.image.key} style={{ width: '100%' }} alt={product.name} /> <h2>{product.name}</h2> <p className="description">{product.description}</p> <p className="price">Price: <b>${product.amount}</b></p> <button onClick={() => buyOne(product.sku)}>Buy Now</button> {' '} <button onClick={() => addToCart(product.sku)}>Add to Cart</button> </div> )) } </div> ) : ( <h2>Loading...</h2> ) } </> ) }; export default Products;
Note, we are using the
axios library to make an API call to fetch all the products. On fetching all the products, we loop through and add the information like image, description, amount, etc. Please note, we have kept three empty methods. We will add code for them a little later.
Add a file called
products.css inside the
src/components folder with the following content,
header { background: #ff8c00; padding: 1rem 2.5vw; font-size: 35px; } header a { color: white; font-weight: 800; text-decoration: none; } main { margin: 2rem 2rem 2rem 2rem; width: 90vw; } .products { display: grid; gap: 2rem; grid-template-columns: repeat(3, 1fr); margin-top: 3rem; } .product img { max-width: 100%; } .product button { background: #ff8c00; border: none; border-radius: 0.25rem; color: white; font-size: 1.25rem; font-weight: 800; line-height: 1.25rem; padding: 0.25rem; cursor: pointer; } .cart { position: absolute; display: block; width: 48px; height: 48px; top: 100px; right: 40px; cursor: pointer; } .cart-badge { position: absolute; top: -11px; right: -13px; background-color: #FF6600; color: #ffffff; font-size: 14px; font-weight: bold; padding: 5px 14px; border-radius: 19px; }
Now, replace the content of the file,
index.js with the following content,
import React from "react"; import Layout from "../components/layout"; import SEO from "../components/seo"; import Products from '../components/products'; const IndexPage = () => ( <Layout> <SEO title="Happy Paws" /> <h1>Hey there 👋</h1> <p>Welcome to the Happy Paws cat store. Get a Cat 🐈 and feel awesome.</p> <small> This is in test mode. That means you can check out using <a href=" target="_blank" rel="noreferrer">any of the test card numbers.</a> </small> <Products /> </Layout> ) export default IndexPage;
At this stage, start the netlify dev if it is not running already. Access the interface using You should see the page like this,
It seems we have some problems with the Cat images. However, all other details of each of the cat products seem to be fine. To fix that, add two cat images of your choice under the
src/images folder. The images' names should be the same as the image key mentioned in the
functions/data/products.json file. In our case, the names are
brownie.jpg and
flur.jpg.
Edit the
src/components/Image.js file and replace the content with the following,
import React from 'react' import { graphql, useStaticQuery } from 'gatsby' import Img from 'gatsby-image'; const Image = ({ fileName, alt, style }) => { const { allImageSharp } = useStaticQuery(graphql` query { allImageSharp { nodes { fluid(maxWidth: 1600) { originalName ...GatsbyImageSharpFluid_withWebp } } } } `) const fluid = allImageSharp.nodes.find(n => n.fluid.originalName === fileName) .fluid return ( <figure> <Img fluid={fluid} alt={alt} style={style} /> </figure> ) } export default Image;
Here we are using Gatsby’s sharp plugin to prebuilt the images. Now rerun the netlify dev command and access the user interface to see the correct images.
A few more things, open the
src/components/Header.js file and replace the content with this,
import { Link } from "gatsby" import PropTypes from "prop-types" import React from "react" const Header = ({ siteTitle }) => ( <header> <Link to="/"> {siteTitle} </Link> </header> ) Header.propTypes = { siteTitle: PropTypes.string, } Header.defaultProps = { siteTitle: ``, } export default Header
Now the header should look much better like,
But, we want to change that default header text to something meaningful. Open the file
gatsby-config.js and edit the
title and
description of the
siteMetaData object as
siteMetadata: { title: `Happy Paws - Cats love you!`, description: `Cat store is the one point solution for your Cat`, },
This will restart the Gatsby server. Once the server is up, you should see the header text changed to,
Next, let us do the required set up for the Netlify and Stripe integration.
Setup Stripe
Browse to the
functions folder and initialize a node project,
npm init -y
This will create a file called package.json. Install dependencies using the command,
yarn add stripe dotenv
This command will install stripe and
dotenv library, which is required to manage the environment variables locally.
Get your Stripe test credentials
- Log into Stripe at
- Make sure the “Viewing test data” switch is toggled on
- Click “Developers” in the left-hand menu
- Click “API keys”.
- Copy both the publishable key and secret key from the “Standard keys” panel
Create a file called
.env at the root of the project with the following content,
STRIPE_PUBLISHABLE_KEY= YOUR_STRIPE_PUBLISHABLE_KEY
STRIPE_SECRET_KEY= YOUR_STRIPE_SECRET_KEY
Note to replace the
YOUR_STRIPE_PUBLISHABLE_KEY and
YOUR_STRIPE_SECRET_KEY with the actual values got from the Stripe dashboard, respectively.
Create a Checkout Function
Next is to create a checkout function using netlify serverless and stripe. Create a file called
create-checkout.js with the following content under the
function folder.
require("dotenv").config(); const stripe = require('stripe')(process.env.STRIPE_SECRET_KEY); const inventory = require('./data/products.json'); const getSelectedProducts = skus => { let selected = []; skus.forEach(sku => { const found = inventory.find((p) => p.sku === sku); if (found) { selected.push(found); } }); return selected; } const getLineItems = products => { return products.map( obj => ({ name: obj.name, description: obj.description, images:[obj.image.url], amount: obj.amount, currency: obj.currency, quantity: 1 })); } exports.handler = async (event) => { const { skus } = JSON.parse(event.body); const products = getSelectedProducts(skus); const validatedQuantity = 1; const lineItems = getLineItems(products); console.log(products); console.log(lineItems); const session = await stripe.checkout.sessions.create({ payment_method_types: ['card'], billing_address_collection: 'auto', shipping_address_collection: { allowed_countries: ['US', 'CA', 'IN'], }, success_url: `${process.env.URL}/success`, cancel_url: process.env.URL, line_items: lineItems, }); return { statusCode: 200, body: JSON.stringify({ sessionId: session.id, publishableKey: process.env.STRIPE_PUBLISHABLE_KEY, }), }; };
Note here we are expecting a payload with the selected product's SKU information. Upon getting that, we will take out other relevant information of the selected products from the inventory, i.e.,
products.json file. Next, we create the line item object and pass it to the stripe API for creating a Stripe session. We also specify to delegate to a page called
success.html once the payment is successful.
UI Changes for Checkout
The last thing we need to do now is to call the new serverless function from the UI. First, we need to install the stripe library for clients. Execute this command from the root of the project folder,
yarn add @stripe/stripe-js
Create a folder called utils under the
src folder. Create a file named
stripejs.js under
src/utils with the following content,
import { loadStripe } from '@stripe/stripe-js'; let stripePromise; const getStripe = (publishKey) => { if (!stripePromise) { stripePromise = loadStripe(publishKey); } return stripePromise; } export default getStripe;
This is to get the stripe instance globally at the client-side using a singleton method. Now open the
products.js file under
src/components to make the following changes,
Import the getStripe function from ‘utils/stripejs’,
Time to add code for the functions
addToCart,
byuOne, and
const addToCart = sku => { setCart([...cart, sku]); } const buyOne = sku => { const skus = []; skus.push(sku); const payload = { skus: skus }; performPurchase(payload); } const checkOut = () => { console.log('Checking out...'); const payload = { skus: cart }; performPurchase(payload); console.log('Check out has been done!'); }
Last, add the function
performPurchase, which will actually make the API call when the Buy Now or Checkout buttons are clicked.
const performPurchase = async payload => { const response = await axios.post('/api/create-checkout', payload); console.log('response', response); const stripe = await getStripe(response.data.publishableKey); const { error } = await stripe.redirectToCheckout({ sessionId: response.data.sessionId, }); if (error) { console.error(error); } }
Now restart netlify dev and open the app in the browser,
You can start the purchase by clicking on the Buy Now button or add the products to the cart and click on the cart icon at the top right of the page. Now the stripe session will start, and the payment page will show up,
Provide the details and click on the Pay button. Please note, you can get the test card information from here. The payment should be successful, and you are supposed to land on a success page as we have configured previously. But we have not created a success page yet. Let’s create one.
Create a file called
success.js under the
src/pages folder with the following content,
import React from 'react'; import Layout from "../components/layout" import SEO from "../components/seo" const Success = () => { return ( <Layout> <SEO title="Cat Store - Success" /> <h1>Yo, Thank You!</h1> <img src=" alt="dancing cat"/> </Layout> ) } export default Success;
Complete the payment to see this success page in action after a successful payment,
Great, we have the Jamstack pet store app running using the Netlify serverless functions, Stripe Payment API, and Gatsby framework. But it is running locally. Let us deploy it using Netlify Hosting to access it publicly.
Deploy and Host on Netlify CDN
First, commit and push all the code to your GitHub repository. Login to your netlify account from the browser and click on the ‘New site from Git’ button. Select the option GitHub from the next page,
Search and select your GitHub repository to deploy and host,
Finally, provide the build options as shown below and click on the ‘Deploy Site’ button.
That’s all, and you should have the site live with the app.
Congratulations 🎉 !!! You have successfully built a Jamstack pet shop application with Netlify Serverless functions, Stripe APIs, Gatsby framework, and deployed it on Netlify CDN.
Before we end...
Thank you for reading this far! Let’s connect. You can @ me on Twitter (@tapasadhikary) with comments, or feel free to follow. Please like/share this article so that it reaches others as well.
Do not forget to check out my previous articles on
Jamstack,
- JAMstack for All: An Introduction
- JAMstack vs traditional monolithic workflow
- What is a Static Site Generator and how to select one?
- Hey, I have made a demo lab using JAMstack
Discussion (4)
How can we make the data in data/products.json dynamic? Like for example coming from a WordPress CMS?
Hi Jejo,
Thanks for asking. The Gatsby plug-in doesn't have the subscription and pull mechanism built yet. As it is an open-source, I am hoping it get built sooner. Until then, we need to use fetch or something alike to get the dynamic data.
In case, you are interseted to look into the plugin, here it is
github.com/atapas/gatsby-source-ha...
Thank you! I just got another problem. It's affecting my score in PageSpeed Insights. Any idea on how I can make this go away? dev-to-uploads.s3.amazonaws.com/up... | https://dev.to/atapas/how-to-create-a-jamstack-pet-store-app-using-stripe-gatsbyjs-and-netlify-functions-1jfh | CC-MAIN-2022-21 | refinedweb | 2,749 | 59.19 |
Python vs Java
Hey :) I'm a learner of C/C++ and LOLCODE; I'm nearly finished with both, so I'm thinking of a 'second' (rather, third) language. I'm getting confused of which to learn between Java and Python. People around me say that Java's more useful, while Python is easier to learn. What's your opinion? And why? Leave it below:)
🤔 Hmm. To be honest, you could go with either. If you're in academia (and they use Java) I would probably go with Java just to make your computer science classes easier. If you don't think you're going to be taking any Java classes in the near future I would go with Python. I'm not sure how much you're learned in terms of theory when doing C/C++, but Java is a perfect language if you want to learn about that kind of stuff.
Have you considered learning JavaScript? :) It's a wonderful language!
@Pythonier Personally I find that JS and Java are less similar than C++ and Java, so since he already knows C++ he's probably good (He might want to learn JS instead partially because Java and C++ are so similar and thus are often interchangeable for pretty much any given task)
@AndrewBowers1 thank u for ur reply! I am considering many different options at this point!
I for one know both languages decently well, and say this the people telling you Java is more useful are 100% wrong, not only does python have more application uses it is very common in Academia while java is being moved out of academic areas, and as well Python is one of the fastest growing needs in the Software Engineering industry, with great payments. Python is in my opinion one of the best languages for any purpose you may have for it, I use it for game programming, web programming, cybersecurity, and other tasks such as creating AI / messing about with machine learning. The only times I use Java out of class is never. So if you want to learn/master something practical and can very likely be what you will be working with 5-10 years in the future choose python, but if you are looking to learn more languages learning Java will also be beneficial as it shares many of the same concepts as other languages, while python is more unique. I would suggest if your only options are as listed, is to choose Python as your next language to learn, because of the practical uses the language has.
Python!
I recommend Python (3.7 - not 2.7) because of its use in data science. First, python has a simple syntax.
The Zen of Python:
Open a
bash terminal
type
python3
type
import this
read!
Games
Python is a great language to write anything from simple text adventure games to complex ones - all using data collection. In its simplicity, it is the perfect second language (but not a good first language). I recommend learning python from an expert rather than from an online program such as CodeCademy. However, if there is no other way, use CodeCademy.
Alternatives
HTML, CSS, and JavaScript (NOT JAVA) are always a good trio of second or third languages. They are empowering, allowing you to make websites, style them, and power them. These languages are good to learn from CodeCademy, but also good coming from experts if possible.
It's always good to know as many languages as possible, in any order possible (except with python first)
Okay so, I recommend Java from your current experience.
It's "object-oriented", which means you need more code to do the same thing, but it's normally more expressive. At least, that's what object-oriented is to me. (though that's with stateful programming,)
Though, if you want to die, try learning Lisp
Java, especially if you've already learned C++, they're very similar and the transition is easy. Python's sole purpose is to be simple, it's not very good at anything else.
Side note: Idk what you mean by "finished" with the languages, there's always more to learn. I'd honestly recommend getting really into a language rather than knowing several with only baseline knowledge of each.
EDIT: Python is useful sometimes, when you just need to do something simple and performance isn't an issue. It really depends on what you want to be doing.
@AquaMarine0421 That's what I'd strongly recommend. With C/C++, knowledge of the language grows exponentially as long as you're learning. What i mean by that is, once you've learned the basics, everything else comes much easier and only gets more easy the more you learn. If you can't find a good course for learning the language, that's fine, what I ended up doing after i learned the fundamentals was I started working on more and more complex projects which required me to implement more and more advanced concepts to work properly. When I found something I didn't know, I'd think of a potential solution to a problem, and a quick Google search would usually yield a solution, and reference documentation to learn from. If it doesn't, you can always ask here (or if you're so inclined, ask me personally, my email is in my profile and I like helping)
Java would be better since it is supported by a wider spread of systems. Plus, you can use Java to code games, and not just text adventures (*ahem*, python). Although, Python is easier to program, the language is not as powerful or expansive as Java.
Please mark this as answered if I helped your problem 👍
I know both Python and Java. I think that Python is easier to read, and you have to type less. Java is longer, but I have created many more non-console games in Java than I have in Python. You could always use Pygame, but I have to learn more about Pygame.
Python. It would help me in my argument against LB (he's my friend). He thinks Java's better (it's not).
Really, you are correct about the game thing, without turtle.
Not advertising, it was from someone else. Python gets hard to program when the problem is complicated, but if you're a beginner at coding python would be recommended. But you already know C++/C, so Java is better, as it's more usable.
Unrelated: You really like Scratch, don't you? You're name description made something in my mind click.
Why worry about choosing python and java? Have you ever considered Jython?
Jython combines Java and Python to make a useful, easy to learn language. There is also Cython, which is C and Python Combined. It gives more support to other libraries, and you can use python for low-level programming.
@HarperframeInc Jython is just a Python interpreter with Java bindings, not a separate language.
@SPQR, @eankeen, @Vandesm14: I think we all responded at the exact same time!
@Pythonier i think we did lol
@SPQR Yep! After posting my comment, I refreshed it and was surprised at how many people commented that fast!
@Vandesm14 Although I think mine was just a minute before yours and @SPQR
@Pythonier | https://replit.com/talk/ask/Python-vs-Java/14382 | CC-MAIN-2021-43 | refinedweb | 1,222 | 71.24 |
You want to process multiple requests in parallel, but you don necessarily want to run all the requests simultaneously. Using a technique like that in Recipe 20.6 can create a huge number of threads running at once, slowing down the average response time. You want to set a limit on the number of simultaneously running threads.
You want a thread pool. If you e writing an Internet server and you want to service requests in parallel, you should build your code on top of the gserver module, as seen in Recipe 14.14: it has a thread pool and many TCP/IP-specific features. Otherwise, heres a generic THReadPool class, based on code from gserver.
The instance variable @pool contains the active threads. The Mutex and the ConditionVariable are used to control the addition of threads to the pool, so that the pool never contains more than @max_size tHReads:
require hread class ThreadPool def initialize(max_size) @pool = [] @max_size = max_size @pool_mutex = Mutex.new @pool_cv = ConditionVariable.new end
When a thread wants to enter the pool, but the pool is full, the thread puts itself to sleep by calling ConditionVariable#wait. When a thread in the pool finishes executing, it removes itself from the pool and calls ConditionVariable#signal to wake up the first sleeping thread:
def dispatch(*args) Thread.new do # Wait for space in the pool. @pool_mutex.synchronize do while @pool.size >= @max_size print "Pool is full; waiting to run #{args.join(,)}… " if $DEBUG # Sleep until some other thread calls @pool_cv.signal. @pool_cv.wait(@pool_mutex) end end
The newly-awakened thread adds itself to the pool, runs its code, and then calls ConditionVariable#signal to wake up the next sleeping thread:
@pool << Thread.current begin yield(*args) rescue => e exception(self, e, *args) ensure @pool_mutex.synchronize do # Remove the thread from the pool. @pool.delete(Thread.current) # Signal the next waiting thread that theres a space in the pool. @pool_cv.signal end end end end def shutdown @pool_mutex.synchronize { @pool_cv.wait(@pool_mutex) until @pool.empty? } end def exception(thread, exception, *original_args) # Subclass this method to handle an exception within a thread. puts "Exception in thread #{thread}: #{exception}" end end
Heres a simulation of five incoming jobs that take different times to run. The pool ensures no more than three jobs run at a time. The job code doesn need to know anything about threads or thread pools; thats all handled by THReadPool#dispatch.
$DEBUG = true pool = ThreadPool.new(3) 1.upto(5) do |i| pool.dispatch(i) do |i| print "Job #{i} started. " sleep(5-i) print "Job #{i} complete. " end end # Job 1 started. # Job 3 started. # Job 2 started. # Pool is full; waiting to run 4… # Pool is full; waiting to run 5… # Job 3 complete. # Job 4 started. # Job 2 complete. # Job 5 started. # Job 5 complete. # Job 4 complete. # Job 1 complete. pool.shutdown
When should you use a thread pool, and when should you just send a swarm of threads after the problem? Consider why this pattern is so common in Internet servers that its built into Rubys gserver library. Internet server requests are usually I/O bound, because most servers operate on the filesystem or a database. If you run high latency requests in parallel (like requests for filesystem files), you can complete multiple requests in about the same time it would take to complete a single request.
But Internet server requests can use a lot of memory, and any random user on the Internet can trigger a job on your server. If you create and start a thread for every incoming request, its easy to run out of resources. You need to find a tradeoff between the performance benefit of multithreading and the performance hazard of thrashing due to insufficient resources. The simplest way to do this is to limit the number of requests that can be processed at a given time.
A thread pool isn a connection pool, like you might see with a database. Database connections are often pooled because they e expensive to create. Threads are pretty cheap; we just don want a lot of them actively running at once. The example in the Solution creates five threads at once, but only three of them can be active at any one time. The rest are asleep, waiting for a notification from the condition variable pool_cv.
Calling ThreadPool#dispatch with a code block creates a new thread that runs the code block, but not until it finds a free slot in the thread pool. Until then, its waiting on the condition variable @pool_cv. When one of the threads in the pool completes its code block, it calls signal on the condition variable, waking up the first thread currently waiting on it.
The shutdown method makes sure all the jobs complete by repeatedly waiting on the condition variable until no other threads want access to the pool. | https://flylib.com/books/en/2.44.1/limiting_multithreading_with_a_thread_pool.html | CC-MAIN-2020-45 | refinedweb | 816 | 66.44 |
sqlite3 — DB-API 2.0 interface for SQLite databases
Source code: Lib.
To use the module, you must first create a
Connection object that represents the database. Here the data will be stored in the
example.db file:
import sqlite3 con = sqlite3.connect('example.db')
You can also supply the special name
:memory: to create a database in RAM.
Once you have a
Connection, you can create a
Cursor object and call its
execute() method to perform SQL commands:
cur = con.cursor() # Create table cur.execute('''CREATE TABLE stocks (date text, trans text, symbol text, qty real, price real)''') # Insert a row of data cur.execute("INSERT INTO stocks VALUES ('2006-01-05','BUY','RHAT',100,35.14)") # Save (commit) the changes con.commit() # We can also close the connection if we are done with it. # Just be sure any changes have been committed or they will be lost. con.close()
The data you’ve saved is persistent and is available in subsequent sessions:
import sqlite3 con = sqlite3.connect('example.db') cur = con.cursor() cur.execute('SELECT * FROM stocks ORDER BY price'): print(row) ('2006-01-05', 'BUY', 'RHAT', 100, 35.14) ('2006-03-28', 'BUY', 'IBM', 1000, 45.0) ('2006-04-06', 'SELL', 'IBM', 500, 53.0) ('2006-04-05', 'BUY', 'MSFT', 1000, 72.0)
Usually your SQL operations will need to use values from Python variables. You shouldn’t assemble your query using Python’s string operations because doing so is insecure; it makes your program vulnerable to an SQL injection attack (see the xkcd webcomic for a humorous example of what can go wrong):
# Never do this -- insecure! symbol = 'RHAT' cur.execute("SELECT * FROM stocks WHERE symbol = '%s'" % symbol)
Instead, use the DB-API’s parameter substitution. Put a placeholder wherever you want to use a value, and then provide a tuple of values as the second argument to the cursor’s
execute() method. An SQL statement may use one of two kinds of placeholders: question marks (qmark style) or named placeholders (named style). For the qmark style,
parameters must be a sequence. For the named style, it can be either a sequence or
dict instance. The length of the sequence must match the number of placeholders, or a
ProgrammingError is raised. If a
dict is given, it must contain keys for all named parameters. Any extra items are ignored. Here’s an example of both styles:
import sqlite3 con = sqlite3.connect(":memory:") cur = con.cursor() cur.execute("create table lang (lang_name, lang_age)") # This is the qmark style: cur.execute("insert into lang values (?, ?)", ("C", 49)) # The qmark style used with executemany(): lang_list = [ ("Fortran", 64), ("Python", 30), ("Go", 11), ] cur.executemany("insert into lang values (?, ?)", lang_list) # And this is the named style: cur.execute("select * from lang where lang_name=:name and lang_age=:age", {"name": "C", "age": 49}) print(cur.fetchall()) con.close()
See also
-
The SQLite web page; the documentation describes the syntax and the available data types for the supported SQL dialect.
-
Tutorial, reference and examples for learning SQL syntax.
- PEP 249 - Database API Specification 2.0
PEP written by Marc-André Lemburg.
Module functions and constants
sqlite3.version
The version number of this module, as a string. This is not the version of the SQLite library.
sqlite3.version_info
The version number of this module, as a tuple of integers. This is not the version of the SQLite library.
sqlite3.sqlite_version
The version number of the run-time SQLite library, as a string.
sqlite3.sqlite_version_info
The version number of the run-time SQLite library, as a tuple of integers.
sqlite3.PARSE_DECLTYPES
This constant is meant to be used with the detect_types parameter of the
connect()function.
Setting it makes the
sqlite3module.
sqlite3.PARSE_COLNAMES
This constant is meant to be used with the detect_types parameter of the
connect()function.. The column name found in
Cursor.descriptiondoes not include the type, i. e. if you use something like
'as "Expiration date [datetime]"'in your SQL, then we will parse out everything until the first
'['for the column name and strip the preceeding space: the column name would simply be “Expiration date”.
sqlite3.connect(database[, timeout, detect_types, isolation_level, check_same_thread, factory, cached_statements, uri])
Opens a connection to the SQLite database file database. By default returns a
Connectionobject, unless a custom factory is given.
database is a path-like object giving the pathname (absolute or relative to the current working directory) of the database file to be opened. You can use
"property of
Connectionobjects.
SQLite natively supports only the types TEXT, INTEGER, REAL, BLOB and NULL. If you want to use other types you mustand
PARSE_COLNAMESto turn type detection on. Due to SQLite behaviour, types can’t be detected for generated fields (for example
max(data)), even when detect_types parameter is set. In such case, the returned type is
str.
By default, check_same_thread is
Trueand only the creating thread may use the connection. If set
False, the returned connection may be shared across multiple threads. When using multiple threads with the same connection writing operations should be serialized by the user to avoid data corruption..
If uri is true, database is interpreted as a URI. This allows you to specify options. For example, to open a database in read-only mode you can use:Python
db = sqlite3.connect('file:path/to/database?mode=ro', uri=True)
More information about this feature, including a list of recognized options, can be found in the SQLite URI documentation.
Raises an auditing event
sqlite3.connectwith argument
database.
Changed in version 3.4: Added the uri parameter.
Changed in version 3.7: database can now also be a path-like object, not only a string.
sqlite3.register_converter(typename, callable) typename and the name of the type in your query are matched in case-insensitive manner.
sqlite3.register_adapter(type, callable)
Registers a callable to convert the custom Python type type into one of SQLite’s supported types. The callable callable accepts as single parameter the Python value, and must return a value of the following types: int, float, str or bytes.
sqlite3.complete_statement(sql)
Returns
Trueif:Python
# A minimal SQLite shell for experiments import sqlite3 con = sqlite3.connect(":memory:") con.isolation_level = None cur = con.cursor() buffer = "" print("Enter your SQL commands to execute in sqlite3.") print("Enter a blank line to exit.") while True: line =()
sqlite3.enable_callback_tracebacks(flag)
Falseto disable the feature again.
Connection Objects
class sqlite3.Connection
A SQLite database connection has the following attributes and methods:
isolation_level
Get or set the current default isolation level.
Nonefor autocommit mode or one of “DEFERRED”, “IMMEDIATE” or “EXCLUSIVE”. See section Controlling Transactions for a more detailed explanation.
in_transaction
Trueif a transaction is active (there are uncommitted changes),
Falseotherwise. Read-only attribute.
New in version 3.2.
cursor(factory=Cursor)
The cursor method accepts a single optional parameter factory. If supplied, this must be a callable returning an instance of
Cursoror its subclasses.
commit() closes the database connection. Note that this does not automatically call
commit(). If you just close your database connection without calling
commit()first, your changes will be lost!
execute(sql[, parameters])
This is a nonstandard shortcut that creates a cursor object by calling the
cursor()method, calls the cursor’s
execute()method with the parameters given, and returns the cursor.
executemany(sql[, parameters])
This is a nonstandard shortcut that creates a cursor object by calling the
cursor()method, calls the cursor’s
executemany()method with the parameters given, and returns the cursor.
executescript(sql_script)
This is a nonstandard shortcut that creates a cursor object by calling the
cursor()method, calls the cursor’s
executescript()method with the given sql_script, and returns the cursor.
create_function(name, num_params, func, *, deterministic=False)
Creates a user-defined function that you can later use from within SQL statements under the function name name. num_params is the number of parameters the function accepts (if num_params is -1, the function may take any number of arguments), and func is a Python callable that is called as the SQL function. If deterministic is true, the created function is marked as deterministic, which allows SQLite to perform additional optimizations. This flag is supported by SQLite 3.8.3 or higher,
NotSupportedErrorwill be raised if used with older versions.
The function can return any of the types supported by SQLite: bytes, str, int, float and
None.
Changed in version 3.8: The deterministic parameter was added.
Example:Python
import sqlite3 import hashlib def md5sum(t): return hashlib.md5(t).hexdigest() con = sqlite3.connect(":memory:") con.create_function("md5", 1, md5sum) cur = con.cursor() cur.execute("select md5(?)", (b"foo",)) print(cur.fetchone()[0]) con.close()
create_aggregate(name, num_params, aggregate_class)
Creates a user-defined aggregate function.
The aggregate class must implement a
stepmethod, which accepts the number of parameters num_params (if num_params is -1, the function may take any number of arguments), and a
finalizemethod which will return the final result of the aggregate.
The
finalizemethod can return any of the types supported by SQLite: bytes, str, int, float and
None.
Example:Python
import sqlite3 class MySum: def __init__(self): self.count = 0 def step(self, value): self.count += value def finalize(self): return self.count con = sqlite3.connect(":memory:") con.create_aggregate("mysum", 1, MySum) cur = con.cursor() cur.execute("create table test(i)") cur.execute("insert into test(i) values (1)") cur.execute("insert into test(i) values (2)") cur.execute("select mysum(i) from test") print(cur.fetchone()[0]) con.close()
create_collation(name, callable)
Creates a collation with the specified name and callable. The callable will be passed two string arguments. following example shows a custom collation that sorts “the wrong way”:Python
import sqlite3 def collate_reverse(string1, string2): if string1 == string2: return 0 elif string1 < string2: return 1 else: return -1 con = sqlite3.connect(":memory:") con.create_collation("reverse", collate_reverse) cur = con.cursor() cur.execute("create table test(x)") cur.executemany("insert into test(x) values (?)", [("a",), ("b",)]) cur.execute("select x from test order by x collate reverse") for row in cur: print(row) con.close()
To remove a collation, call
create_collationwith
Noneas callable:Python
con.create_collation("reverse", None)
interrupt()
You can call this method from a different thread to abort any queries that might be executing on the connection. The query will then abort and the caller will get an exception.
This routine registers a callback. The callback is invoked for each attempt to access a column of a table in the database. The callback should return
SQLITE_OK.
set_progress_handler(handler, n)for handler.
Returning a non-zero value from the handler function will terminate the currently executing query and cause it to raise an
OperationalErrorexception.
set_trace_callback(trace_callback)
Registers trace_callback to be called for each SQL statement that is actually executed by the SQLite backend.
The only argument passed to the callback is the statement (as string) that is being executed. The return value of the callback is ignored. Note that the backend does not only run statements passed to the
Cursor.execute()methods. Other sources include the transaction management of the Python module and the execution of triggers defined in the current database.
Passing
Noneas trace_callback will disable the trace callback.
New in version 3.3.
enable_load_extension(enabled) 3.2.Python loading) con.close()
load_extension(path)
This routine loads a SQLite extension from a shared library. You have to enable extension loading with
enable_load_extension()before you can use this routine.
Loadable extensions are disabled by default. See 1.
New in version 3.2.
row_factory.
Example:Python
import sqlite3 def dict_factory(cursor, row): d = {} for idx, col in enumerate(cursor.description): d[col[0]] = row[idx] return d con = sqlite3.connect(":memory:") con.row_factory = dict_factory cur = con.cursor() cur.execute("select 1 as a") print(cur.fetchone()["a"]) con.close()
If returning a tuple doesn’t suffice and you want name-based access to columns, you should consider setting
row_factoryto the highly-optimized
sqlite3.Rowtype.
Rowprovides both index-based and case-insensitive name-based access to columns with almost no memory overhead. It will probably be better than your own custom dictionary-based approach or even a db_row based solution.
text_factory
Using this attribute you can control what objects are returned for the
TEXTdata type. By default, this attribute is set to
strand the
sqlite3module will return Unicode objects for
TEXT. If you want to return bytestrings instead, you can set it to
bytes.
You can also set it to any other callable that accepts a single bytestring parameter and returns the resulting object.
See the following example code for illustration:Python
import sqlite3 con = sqlite3.connect(":memory:") cur = con.cursor() AUSTRIA = "\xd6sterreich" # by default, rows are returned as Unicode cur.execute("select ?", (AUSTRIA,)) row = cur.fetchone() assert row[0] == AUSTRIA # but we can make sqlite3 always return bytestrings ... con.text_factory = bytes cur.execute("select ?", (AUSTRIA,)) row = cur.fetchone() assert type(row[0]) is bytes # the bytestrings will be encoded in UTF-8, unless you stored garbage in the # database ... assert row[0] == AUSTRIA.encode("utf-8") # we can also implement a custom text_factory ... # here we implement one that appends "foo" to all strings con.text_factory = lambda x: x.decode("utf-8") + "foo" cur.execute("select ?", ("bar",)) row = cur.fetchone() assert row[0] == "barfoo" con.close()
total_changes
Returns the total number of database rows that have been modified, inserted, or deleted since the database connection was opened.
iterdump()
Returns an iterator to dump the database in an SQL text format. Useful when saving an in-memory database for later restoration. This function provides the same capabilities as the .dump command in the sqlite3 shell.
Example:Python
# Convert file existing_db.db to SQL dump file dump.sql import sqlite3 con = sqlite3.connect('existing_db.db') with open('dump.sql', 'w') as f: for line in con.iterdump(): f.write('%s\n' % line) con.close()
backup(target, *, pages=-1, progress=None, name="main", sleep=0.250)
This method makes a backup of a SQLite database even while it’s being accessed by other clients, or concurrently by the same connection. The copy will be written into the mandatory argument target, that must be another
Connectioninstance.
By default, or when pages is either
0or a negative integer, the entire database is copied in a single step; otherwise the method performs a loop copying up to pages pages at a time.
If progress is specified, it must either be
Noneor a callable object that will be executed at each iteration with three integer arguments, respectively the status of the last iteration, the remaining number of pages still to be copied and the total number of pages.
The name argument specifies the database name that will be copied: it must be a string containing either
"main", the default, to indicate the main database,
"temp"to indicate the temporary database or the name specified after the
ASkeyword in an
ATTACH DATABASEstatement for an attached database.
The sleep argument specifies the number of seconds to sleep by between successive attempts to backup remaining pages, can be specified either as an integer or a floating point value.
Example 1, copy an existing database into another:Python
import sqlite3 def progress(status, remaining, total): print(f'Copied {total-remaining} of {total} pages...') con = sqlite3.connect('existing_db.db') bck = sqlite3.connect('backup.db') with bck: con.backup(bck, pages=1, progress=progress) bck.close() con.close()
Example 2, copy an existing database into a transient copy:Python
import sqlite3 source = sqlite3.connect('existing_db.db') dest = sqlite3.connect(':memory:') source.backup(dest)
Availability: SQLite 3.6.11 or higher
New in version 3.7.
Cursor Objects
class sqlite3.Cursor
A
Cursorinstance has the following attributes and methods.
execute(sql[, parameters])
Executes an SQL statement. Values may be bound to the statement using placeholders.
execute()will only execute a single SQL statement. If you try to execute more than one statement with it, it will raise a
Warning. Use
executescript()if you want to execute multiple SQL statements with one call.
executemany(sql, seq_of_parameters)
Executes a parameterized SQL command against all parameter sequences or mappings found in the sequence seq_of_parameters. The
sqlite3module also allows using an iterator yielding parameters instead of a sequence.Python()) con.close()
Here’s a shorter example using a generator:Python
import sqlite3 import string def char_generator(): for c in string.ascii_lowercase: yield (c,) con = sqlite3.connect(":memory:") cur = con.cursor() cur.execute("create table characters(c)") cur.executemany("insert into characters(c) values (?)", char_generator()) cur.execute("select c from characters") print(cur.fetchall()) con.close()
executescript(sql_script)
This is a nonstandard convenience method for executing multiple SQL statements at once. It issues a
COMMITstatement first, then executes the SQL script it gets as a parameter.
sql_script can be an instance of
str.
Example:Python
import sqlite3 con = sqlite3.connect(":memory:") cur = con.cursor() cur.executescript(""" create table person( firstname, lastname, age ); create table book( title, author, published ); insert into book(title, author, published) values ( 'Dirk Gently''s Holistic Detective Agency', 'Douglas Adams', 1987 ); """) con.close()
fetchone()
Fetches the next row of a query result set, returning a single sequence, or
Nonewhen no more data is available.
fetchmany(size=cursor.arraysize)
Fetches the next set of rows of a query result, returning a list. An empty list is returned when no more rows are available.
The number of rows to fetch per call is specified by the size()
Fetches all (remaining) rows of a query result, returning a list. Note that the cursor’s arraysize attribute can affect the performance of this operation. An empty list is returned when no rows are available.
Close the cursor now (rather than whenever
__del__is called).
The cursor will be unusable from this point forward; a
ProgrammingErrorexception will be raised if any operation is attempted with the cursor.
rowcount
Although the
Cursorclass of the
sqlite3module implements this attribute, the database engine’s own support for the determination of “rows affected”/”rows selected” is quirky..
With SQLite versions before 3.6.5,
rowcountis set to 0 if you make a
DELETE FROM tablewithout any condition.
lastrowid
This read-only attribute provides the rowid of the last modified row. It is only set if you issued an
INSERTor a
REPLACEstatement using the
execute()method. For operations other than
INSERTor
REPLACEor when
executemany()is called,
lastrowidis set to
None.
If the
INSERTor
REPLACEstatement failed to insert the previous successful rowid is returned.
Changed in version 3.6: Added support for the
REPLACEstatement.
arraysize
Read/write attribute that controls the number of rows returned by
fetchmany(). The default value is 1 which means a single row would be fetched per call.
description
This read-only attribute provides the column names of the last query. To remain compatible with the Python DB API, it returns a 7-tuple for each column where the last six items of each tuple are
None.
It is set for
SELECTstatements without any matching rows as well.
connection
This read-only attribute provides the SQLite database
Connectionused by the
Cursorobject. A
Cursorobject created by calling
con.cursor()will have a
connectionattribute that refers to con:Python
>>> con = sqlite3.connect(":memory:") >>> cur = con.cursor() >>> cur.connection == con True
Row Objects
class sqlite3.Row.
keys()
This method returns a list of column names. Immediately after a query, it is the first member of each tuple in
Cursor.description.
Changed in version 3.5: Added support of slicing.
Let’s assume we initialize a table as in the example given above:
con = sqlite3.connect(":memory:") cur = con.cursor() cur.execute('''create table stocks (date text, trans text, symbol text, qty real, price real)''') cur.execute("""insert into stocks values ('2006-01-05','BUY','RHAT',100,35.14)""") con.commit() cur.close()
>>> con.row_factory = sqlite3.Row >>> cur = con.cursor() >>> cur.execute('select * from stocks') <sqlite3.Cursor object at 0x7f4e7dd8fa80> >>> r = cur.fetchone() >>> type(r) <class 'sqlite3.Row'> >>> tuple(r) ('2006-01-05', 'BUY', 'RHAT', 100.0, 35.14) >>> len(r) 5 >>> r[2] 'RHAT' >>> r.keys() ['date', 'trans', 'symbol', 'qty', 'price'] >>> r['qty'] 100.0 >>> for member in r: ... print(member) ... 2006-01-05 BUY RHAT 100.0 35.14
Exceptions
exception sqlite3.Warning
-
exception sqlite3.Error
The base class of the other exceptions in this module. It is a subclass of
Exception.
exception sqlite3.DatabaseError
Exception raised for errors that are related to the database.
exception sqlite3.IntegrityError
Exception raised when the relational integrity of the database is affected, e.g. a foreign key check fails. It is a subclass of
DatabaseError.
exception sqlite3.ProgrammingError
Exception raised for programming errors, e.g. table not found or already exists, syntax error in the SQL statement, wrong number of parameters specified, etc. It is a subclass of
DatabaseError.
exception sqlite3.OperationalError
Exception raised for errors that are related to the database’s operation and not necessarily under the control of the programmer, e.g. an unexpected disconnect occurs, the data source name is not found, a transaction could not be processed, etc. It is a subclass of
DatabaseError.
exception sqlite3.NotSupportedError
Exception raised in case a method or database API was used which is not supported by the database, e.g. calling the
rollback()method on a connection that does not support transaction or has transactions turned off. It is a subclass of
DatabaseError.
SQLite and Python types
Introduction.
Using adapters to store additional Python types in SQLite databases
As described before, SQLite supports only a limited set of types natively. To use other Python types with SQLite, you must adapt them to one of the sqlite3 module’s supported types for SQLite: one of NoneType, int, float, str, bytes.
There are two ways to enable the
sqlite3 module to adapt a custom Python type to one of the supported ones.
Letting your object adapt itself
This is a good approach if you write the class yourself. Let’s suppose you have a class like this:
class Point: def __init__(self, x, y): self.x, self.y = x, y
Now you want to store the point in a single SQLite column. First you’ll have to choose one of the supported types]) con.close()
Registering an adapter callable
The other possibility is to create a function that converts the type to the string representation and register the function with
register_adapter().
import sqlite3 class Point:]) con.close() import time def adapt_datetime(ts): return time.mktime(ts.timetuple()) sqlite3.register_adapter(datetime.datetime, adapt_datetime) con = sqlite3.connect(":memory:") cur = con.cursor() now = datetime.datetime.now() cur.execute("select ?", (now,)) print(cur.fetchone()[0]) con.close()
Converting SQLite values to custom Python types
bytes object, no matter under which data type you sent the value to SQLite.
def convert_point(s): x, y = map(float, s.split(b";")) return Point(x, y)
Now you need to make the
sqlite3 module know that what you select from the database is actually a point. There are two ways of doing this:
- Implicitly via the declared type
- Explicitly via the column name
Both ways are described in section Module functions and constants, in the entries for the constants
PARSE_DECLTYPES and
PARSE_COLNAMES.
The following example illustrates both approaches.
import sqlite3 class Point: def __init__(self, x, y): self.x, self.y = x, y def __repr__(self): return "(%f;%f)" % (self.x, self.y) def adapt_point(point): return ("%f;%f" % (point.x, point.y)).encode('ascii') def convert_point(s): x, y = list(map(float, s.split(b";")))()
Default adapters and converters])) con.close()
If a timestamp stored in SQLite has a fractional part longer than 6 numbers, its value will be truncated to microsecond precision by the timestamp converter.
Controlling Transactions
The underlying
sqlite3 library operates in
autocommit mode by default, but the Python
sqlite3 module by default does not.
autocommit mode means that statements that modify the database take effect immediately. A
BEGIN or
SAVEPOINT statement disables
autocommit mode, and a
COMMIT, a
ROLLBACK, or a
RELEASE that ends the outermost transaction, turns
autocommit mode back on.
The Python
sqlite3 module by default issues a
BEGIN statement implicitly before a Data Modification Language (DML) statement (i.e.
INSERT/
UPDATE/
DELETE/
REPLACE).
You can control which kind of
BEGIN statements
sqlite3 implicitly executes via the isolation_level parameter to the
connect() call, or via the
isolation_level property of connections. If you specify no isolation_level, a plain
BEGIN is used, which is equivalent to specifying
DEFERRED. Other possible values are
IMMEDIATE and
EXCLUSIVE.
You can disable the
sqlite3 module’s implicit transaction management by setting
isolation_level to
None. This will leave the underlying
sqlite3 library operating in
autocommit mode. You can then completely control the transaction state by explicitly issuing
BEGIN,
ROLLBACK,
SAVEPOINT, and
RELEASE statements in your code.
Changed in version 3.6:
sqlite3 used to implicitly commit an open transaction before DDL statements. This is no longer the case.
Using sqlite3 efficiently
Using shortcut methods") # close is not a shortcut method and it's not called automatically, # so the connection object should be closed manually con.close()
Accessing columns by name instead of by index"] con.close()
Using the connection as a context manager
Connection objects can be used as context managers that automatically commit or rollback transactions. In the event of an exception, the transaction is rolled back; otherwise, the transaction is committed:
import sqlite3 con = sqlite3.connect(":memory:") con.execute("create table person (id integer primary key, firstname varchar unique)") # Successful, con.commit() is called automatically afterwards with con: con.execute("insert into person(firstname) values (?)", ("Joe",)) # con.rollback() is called after the with block finishes with an exception, the # exception is still raised and must be caught try: with con: con.execute("insert into person(firstname) values (?)", ("Joe",)) except sqlite3.IntegrityError: print("couldn't add Joe twice") # Connection object used as context manager only commits or rollbacks transactions, # so the connection object should be closed manually con.close()
Footnotes
1(1,2)
The sqlite3 module is not built with loadable extension support by default, because some platforms (notably Mac OS X) have SQLite libraries which are compiled without this feature. To get loadable extension support, you must pass
--enable-loadable-sqlite-extensionsto configure.
License
© 2001–2021 Python Software Foundation
Licensed under the PSF License. | https://reference.codeproject.com/python3_lib/library/sqlite3 | CC-MAIN-2021-43 | refinedweb | 4,335 | 51.55 |
Tell us what you think of the site.
Hi skycastleMud/Wayne,
I’m trying to compile your source code Turntable and I have some issues. I’m using VS2008 Professional Edition x86. Qt 4.6.2 with moc compiler and OS is Windows XP Pro x64. I managed to setup successfully in VS2008. I used the QtCore4.lib and QtGui4.lib from Qt 4.6.2 because it will have a problem if I used Mudbox 2010 SDK. I compiled the TurntableDialog.h using moc and changed it to moc_TurntableDialog.cpp. I add this to the Generated Files folder. I don’t have any problem in compiling and linking. I managed to create the Turntable.mp file. I add this in the plugin folder of Mudbox. I open Mudbox 2010 32bit and see the Plug-ins/Turntable Movie. But after I clicked the Turntable Movie I encountered a problem. A window pop-up with these messages"An unknown error has occurred while performing the operation. We are sorry for the inconvenience.” I attached the zipped file that contains Qt 4.6.2 moc compiler, moc_TurntableDialog.cpp and Turntable.mp for references. Need your help with this problem. I think it would be better if you can provide a step-by-step procedure on how to configure in Visual Studio 2008 x32 and x64 the files, paths, libraries that is needed for the compiler and linker for 32bit and 64bit plug-ins. Also how to compile the header files in Qt moc compiler. What should be the proper version to use. So that any user who wants to create a plug-in will have no problem. Thanks in advance.
You may need to use Qt 4.5.2 instead of Qt 4.6.2
Yup you’re right. There is a bug in Qt 4.6.2 moc compiler when it creates a meta file for QMetaObject. I managed to fix the problem. You can still use the Qt 4.6.2 moc compiler but you have to change the following from the moc_TurntableDialog.cpp. See below.
1) Change revision from 4 to 2 and methods from 14 to 12.
QT_BEGIN_MOC_NAMESPACE
static const uint qt_meta_data_TurntableDialog[] = {
// content:
2, // revision changed from 4 to 2
0, // classname
0, 0, // classinfo
1, 12, // methods changed from 14 to 12
0, 0, // properties
0, 0, // enums/sets
0, 0, // constructors
// slots: signature, parameters, type, tag, flags
17, 16, 16, 16, 0x0a,
0 // eod
};
2) Commented these lines :
/ * #ifdef Q_NO_DATA_RELOCATION
const QMetaObject &TurntableDialog::getStaticMetaObject() { return staticMetaObject; }
#endif //Q_NO_DATA_RELOCATION */
3) Changed these lines :
const QMetaObject *TurntableDialog::metaObject() const
{
return QObject::d_ptr->metaObject ? QObject::d_ptr->metaObject : &staticMetaObject;
}
To these lines :
const QMetaObject *TurntableDialog::metaObject() const
{
return &staticMetaObject;
}
That’s all. | http://area.autodesk.com/forum/autodesk-mudbox/community-help/turntable-plug-in-source-code-compiled-and-linked-doesnt-work-using-qt-462-moc-compiler-on-vs2008-pro-x32/page-last/ | crawl-003 | refinedweb | 452 | 69.58 |
How to read specific lines from a file (by line number)?
If the file to read is big, and you don't want to read the whole file in memory at once:
fp = open("file")for i, line in enumerate(fp): if i == 25: # 26th line elif i == 29: # 30th line elif i > 29: breakfp.close()
Note that
i == n-1 for the
nth line.
In Python 2.6 or later:
with open("file") as fp: for i, line in enumerate(fp): if i == 25: # 26th line elif i == 29: # 30th line elif i > 29: break
The quick answer:
f=open('filename')lines=f.readlines()print lines[25]print lines[29]
or:
lines=[25, 29]i=0f=open('filename')for line in f: if i in lines: print i i+=1
There is a more elegant solution for extracting many lines: linecache (courtesy of "python: how to jump to a particular line in a huge text file?", a previous stackoverflow.com question).
Quoting the python documentation linked above:
import linecache linecache.getline('/etc/passwd', 4)'sys:x:3:3:sys:/dev:/bin/sh\n'
Change the
4 to your desired line number, and you're on. Note that 4 would bring the fifth line as the count is zero-based.
If the file might be very large, and cause problems when read into memory, it might be a good idea to take @Alok's advice and use enumerate().
To Conclude:
- Use
fileobject.readlines()or
for line in fileobjectas a quick solution for small files.
- Use
linecachefor a more elegant solution, which will be quite fast for reading many files, possible repeatedly.
- Take @Alok's advice and use
enumerate()for files which could be very large, and won't fit into memory. Note that using this method might slow because the file is read sequentially.
A fast and compact approach could be:
def picklines(thefile, whatlines): return [x for i, x in enumerate(thefile) if i in whatlines]
this accepts any open file-like object
thefile (leaving up to the caller whether it should be opened from a disk file, or via e.g a socket, or other file-like stream) and a set of zero-based line indices
whatlines, and returns a list, with low memory footprint and reasonable speed. If the number of lines to be returned is huge, you might prefer a generator:
def yieldlines(thefile, whatlines): return (x for i, x in enumerate(thefile) if i in whatlines)
which is basically only good for looping upon -- note that the only difference comes from using rounded rather than square parentheses in the
return statement, making a list comprehension and a generator expression respectively.
Further note that despite the mention of "lines" and "file" these functions are much, much more general -- they'll work on any iterable, be it an open file or any other, returning a list (or generator) of items based on their progressive item-numbers. So, I'd suggest using more appropriately general names;-). | https://codehunter.cc/a/python/how-to-read-specific-lines-from-a-file-by-line-number | CC-MAIN-2022-21 | refinedweb | 498 | 58.62 |
Tips and tricks on Microsoft SharePoint Technologies, Commerce Server and Content Management Server
It’s been a while since I last blogged, mainly because there has been many great articles written by all the Office 2007 System gurus out there. However, for the past couple of weeks I have been battling to find information on using Word 2007 Content Controls and custom XML. I am sure that many people will run into the same issues that I have had, and hence I thought it would be a good idea to document my learnings. I am certainly not the expert in this area, but you might find my stuff helpful.
In this blog I am going to focus on how to use a custom defined XML schema with Word 2007 Content Controls. Specifically I am going to look at how to map your custom XML to Word 2007 Content Controls without writing any code. I have seen several examples that use code, however I wanted to initially stay away from adding code to my Word 2007 template. In future blogs I will add to this scenario by looking at the coding options you have that can help manipulate the content of the document.
Note: I am using Word 2007 Beta 2.
Often businesses want a locked down template that prevents the user from changing the look, feel and layout of the document. Even though we could to some degree do this in previous versions of Word it was difficult and often the user could change the XML schema of the document which then made using the content as XML extremely difficult and error-prone for us programmers. This is where Content Controls and the XML datastore in Word 2007 come the rescue.
A Content Control is essentially a control that is added to a Word template or document that facilitates a user inputting content into a potentially locked down document / template. There are several that come with Word 2007 e.g. text control, rich text control, picture control, drop-down list control and my favorite the calendar control.
The datatsore is essentially an XML file that is stored inside the Word document. Remember that in Word 2007 the new file format is actually a series of files in single compressed (zipped) file. You can add one or more custom XML files into the Word document, these custom XML files allow you to inject or extract data from your document easily.
The following steps are going to show you how to:
Step 1: Create the Word 2007 Document and Add Content Controls
Create a new Word 2007 document and insert the appropriate Content Controls. Content Controls are added to the document by using the developer tab. If you don’t see the developer tab it should be enabled through Word Options. It is easier to work in Design Mode while working with Content Controls, so select the “Design Mode” button in the developer tab. The diagram below shows the developer tab with the content controls available.
It is a good idea to view the properties of your Content Controls so that you can give each control a friendly name and set is properties appropriately. The diagram below shows a very small example of a document with several content controls
The following controls were added:
Save the document and close Word. You are now going to add your custom XML file into the document.
Step 2: Add custom XML file into Word document
You should prepare your custom xml file and other supporting files in a temporary folder before adding them to the document. Specifically create the following folder structure and files.
customXml – folder, this is the root folder with the following directly inside
item1.xml – this contains your custom XML1.xml sample:
<projectDoc xmlns="">
<project>
<consultantName />
<region />
<dateAssigned />
<title />
<duration />
<summary />
</project>
</projectDoc>
The schema of this XML is entirely up to you. Note that I have added a namespace to the document, although it’s not strictly required it is of course best practice.
itemProps1.xml sample:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<ds:datastoreItem ds:
<ds:schemaRefs>
<ds:schemaRef ds:
</ds:schemaRefs>
</ds:datastoreItem>
The itemID should be a unique ID that you should choose, it will be used later to reference the datastore. I used VS .Net 2005 to generate a GUID. Also ensure your schema reference is that which is found in your custom XML file.
item1.xml.rels sample:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Relationships xmlns="">
<Relationship Id="rId1" Type="" Target="itemProps1.xml"/>
</Relationships>
Once this has all been prepared, change the file extension of your Word 2007 document from “.docx” to “.zip”. Open the zip file and copy the customXml folder (with all its contents) into the root of the zip file.
You now need to add a reference to your custom XML file to the Word 2007 document. In other words, just adding the customXML folder doesn’t finish the job, you need to tell Word that this custom XML file exists. To do this you need to copy the “word\_rels\document.xml.rels” file out of the zip file, modify it and add it back again.
document.xml.rels sample:
Insert the following relationship node into the file. You will need to change the id attribute to a value that isn’t already in use. For example, mine was changed to 8, as there was a 7 already.
<Relationship Id="rId8" Type="" Target="../customXml/item1.xml"/>
Once you have added this reference, add the file back into the zip file where it was copied from.
Step 3: Map Content Control to XML element in custom XML file
The final step that needs to be done is to map each Content Control to an element in the custom XML file that was inserted. To do this you need to copy the “word\document.xml” file from the zip file, modify it and add it back again. The document.xml file is the actual document content.
In this file you need to locate the content control that you wish to map. It will be imbedded in a <w:placeholder> element and shortly before this you will find an alias element which holds the friendly name for the Content Control.
For example, my Consultant Name controls has the following XML.
<w:alias w:
<w:id w:
<w:lock w:
<w:placeholder>
<w:docPart w:
</w:placeholder>
<w:showingPlcHdr/>
Immediately after the closing <w:showingPlcHdr/> element add a mapping element which maps the control to a node in the custom XML file. E.g.
<w:dataBinding w:
Note how there is a reference to the namespace being used in the custom xml file and a reference to the GUID that identifies the custom xml file. The Xpath, must be a valid expression that can reference an element or attribute value.
Do this for each control. Save the document.xml file and add it back to the zip file. That’s it, you can rename your zip file back to “.docx”.
Step 4: Test you work
You can now open your Word 2007 document and fill in the content controls. When you save your document the values from those controls will be added into your custom XML file. Test it.
Similarly, you can manually inject values into the custom XML file by renaming the document back to zip and inserting values into your custom XML file. When you open the document back in Word 2007 you will see those values in the Content Controls.
What Next?
Well you can see that the framework is in place to either inject or extract content based on business scenarios. For example; you can now add content to Word documents server side by injecting XML using code. Perhaps producing thousands of documents based on content from a customer database. In my following posts I will show how you can injecting or extracting XML from your documents via code, either server side or client side. We will also look at programmatically inserting datastore(s) and creating mappings. Lastly I hope to look at the integration of InfoPath into your Word 2007 document as a document properties panel and how this can impact your XML in the document.
You can download the sample Word 2007 document that was used in this post from here.
And just for interest sakes, I used Word 2007 to blog this post…cool!
If you would like to receive an email when updates are made to this post, please register here
RSS
While on the Ready tour, I've had lots of interest about the session I'm doing on the Ecma Office Open
Can the height of a content control be set to a particular value?
Category: Word programming Level: Advance Here is a small tip on how to add data to Word content control
PingBack from
Is there anyway to get the formating of the word document in XML file? What I mean by that is, I followed this example step by step and got to the XML file which do contain all the data coming from MS Word content controls but it loses all the formating (Bold, underscore, links etc...) is there any way to get the formating also?
I am working on a news web site where all there editors use Word document and I am trying to get the data out of those docs by using content controls and save it in database and then display on the web.
Any advice? suggesions?
Thanks much.
warraich@ur.msu.edu
Should this process work in the RTM (or SP1 version of Word?)
I tried this out - using the Contoso schemas - those might not be valid - but I was hoping I'd get this to work. Instead, I don't have any data/presentation separation. My customXML file is just a blank shell
It’s been a while since I last blogged, mainly because there has been many great articles written by all the Office 2007 System gurus out there. However, for the past couple of weeks I have been battling to find information on using Word 2007 Content
With the introduction of the Office Open XML Formats in the 2007 release, the process for programmatically using XSLT to generate Word 2007 documents has changed somewhat since the Office 2003 days. For those of you not interested in working with XSLT,
I followed the step by step procedure to add custom xml to the document. But aftter that when I open the document..
I get the following error
"The Office Open XML .docx cannot be opened because there are problems with the contents and details message is
"Microsoft cannot open this file because some parts are missing or invalid".
Please suggest the solution for the same. Thanks in advance.
The article is talking about a ZIP file but it doesn't explain what should be included in the ZIP file or how the XML file gets attached to the document?
Hi Chris,
The docx file is the zip file. i.e. rename the docx file to .zip and open up to have a look. I must also say that I wrote this post along time ago. So I am sure there is better content than this around somewhere, also considering I did this on beta Word 2007.
Look what I found, this is very nice and helpful ;) | http://blogs.msdn.com/modonovan/archive/2006/05/23/604704.aspx | crawl-002 | refinedweb | 1,907 | 69.52 |
Asked by:
Duplicate instances of barcode scanner under POS for .NET 1.14 and Windows 10
Question
We are encountering an issue with POS for .NET 1.14 and Windows 10.
The barcode scanner device will create duplicate entries in the device manager.
One instance of the hardware is created under "Human Interface Device" and one instance is created under "POS barcode scanner". This behavior did not occur under Windows 7/8. It seems to be a new device management issue with Windows 8.1 and Windows 10.
By default, the barcode scanner will not work correctly with windows.
However, if I go under the "Human Interface device" entry and find the hardware_id and manually disable it, then the pos for .net device will work correctly. I am assuming the two hardware entries are conflicting with each other.
Is there some reason for this behavior? It is a real pain for troubleshooting to have to remember to do this each time a new pc is configured.
image barcode1.png, barcode2.png and barcode3.png shows the 2 instances in the pos tester app.
barcode4.png shows the two instances of device in device manager.Monday, February 19, 2018 4:25 PM
All replies
- Is the SO being setup with an XML file that could have two entries in it?
Sean Liming - Book Author: Starter Guide Windows 10 IoT Enterprise - /, February 20, 2018 1:15 AM
- No, we do not use XML files for config.Wednesday, February 21, 2018 2:40 AM
How is the SO being configured? By the Honeywell utility? Or is this just the result of installation?
Is this a clean installation of Win 10 / 8.1 or an upgrade from Windows 7?
Microsoft has support for POS UWP applications in Win 10. I don't see how the driver for UWP would work with POS for .NET.
Sean Liming - Book Author: Starter Guide Windows 10 IoT Enterprise - /
Wednesday, February 21, 2018 8:26 PM
- Edited by Sean LimingMVP Wednesday, February 21, 2018 8:30 PM
- I just plugged in my Honeywell scanner and I get what you are seeing with Device Manager. Without the Honeywell SO installed, and just using the Example SO from the SDK, all I see is one scanner in the TestApp.
Sean Liming - Book Author: Starter Guide Windows 10 IoT Enterprise - /, February 21, 2018 8:37 PM
@Woodchux,
The scanner that you are using apparently exposes two HID interfaces and this is very common. POS for .NET will not use the driver associated with HID POS Scanner in device manager, but there is a passthrough to allow the SO to work properly.
Please provide details about the model of the Honeywell scanner that you are using and whether it has a separate base that allow you to use it in a wireless mode.
It would also be helpful to know how your OPOS service objects are configured under HKEY_LOCAL_MACHINE\SOFTWARE\OLEforRetail\ServiceOPOS in the system registry.
Terry Warwick
Microsoft
Saturday, February 24, 2018 7:39 PM
- Proposed as answer by Terry WarwickMicrosoft employee Monday, May 14, 2018 1:34 PM
- Unproposed as answer by Woodchux Saturday, May 19, 2018 4:49 PM
This occurs with all of the Honeywell 2d scanners (4600, 3310g, 1900, etc). We do not use wireless at all, they are all USB interfaces.
I dont have an issue with it creating/exposes two HID interfaces in device manager. The problem is that POS for .NET will not read a barcode or allow to open the scanner unless i disabled one of the interfaces in device manager. Then it works fine.
We have no registry key in the location you mentioned (there is nothing at that registry location). The only configuration we do is to drop the native service object dll file into the following folder:
C:\Program Files (x86)\Common Files\microsoft shared\Point Of Service\Control Assemblies
It appears that in Microsoft pos for .net test app, there are two instances listed. The first one i can open and claim, but when i scan nothing happens. The second instances I can open and claim and it scans correctly. So, i suspect the app is automatically "claiming" the first device it finds with the service object name. When i disable that first device in device manager, it starts to claim the second (working) instance. I'm not sure if this makes sense.
Is there some way to set the correct one as the "default" so it knows to claim that one? Since they both have the same name, i am not clear how to do this.Saturday, May 19, 2018 4:56 PM
@Woodchux,
We are having difficulty understanding exactly what is causing the problem that you are encountering. There is no "default", we need to track down what is creating the duplicate devices that you are seeing in the POS for .NET Test App. Can you please help us with some more details?
1. Please provide a link to the Service Object that you installed so that we can install the same.
2. Do you install any OPOS components on this PC?
3. If yes, do you have anything in the system registry under \HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\OLEforRetail? Please provide details.
Terry Warwick
MicrosoftMonday, June 11, 2018 3:12 PM
Terry,
1. see attached.
2. No.
3. There is nothing at that registry location.Tuesday, October 9, 2018 4:57 PM
@Woodchux,
It is possible to install multiple instances of a Service Object and it seems as though there is either an issue in this Service Object or the configuration. From your earlier post, this appears to be a POS for .NET Service Object for POS for .NET 1.14, however I am not able to find the equivalent on the Honeywell website. Since you are attempting to install a Honeywell Scanner and that Service Objects are distributed by the HW vendor, I will need to refer you to them for further assistance. If it is determined by Honeywell that there is an issue in POS for .NET causing this problem they can contact us directly to work out the issue.
Terry Warwick, MicrosoftTuesday, October 9, 2018 7:39 PM
@Terry
This is a fairly old native service object so its probably not supported by Honeywell. It is nice to use because it is so lightweight to install and maintain. As I discussed it works great on all legacy HHP and Honeywell devices. The only issue i have found is the duplicate instances in windows device manager that locks up the device unless i disable one in device manager. Then it works fine.
I pasted the relevant part of the code here in case you have any ideas what may be causing it to create multiple instances of the service object. Maybe it is something obvious. There is no configuration so it cannot be a configuration issue. This service object requires no configuration. It is plug and play.
using System; using System.Collections; using System.Collections.Generic; using System.Globalization; using System.Resources; using System.Reflection; using System.Runtime.CompilerServices; using System.Security.Permissions; using System.Text; using System.Threading; using Microsoft.PointOfService; using Microsoft.PointOfService.BaseServiceObjects; using System.IO; //This Service Object conforms to POS for .NET 1.14 namespace POS4NET.POSServiceObjects.Scanners { // [Externalized] //VID 0536 are legacy HHP scanners (e.g., 4600, 4800), //VID 0536 PID 0307 is the ImagePod 30206-001297E //VID 0536 PID 01c7 is the orange scanner HHP4800 or 30206-000709SE //VID 0536 PID 0467 is the orange scanner HHP4800i //VID 0536 PID 04C7 is the black 4600RSF [HardwareId(@"HID\Vid_0536&Pid_0000&Rev_0000", @"HID\Vid_0536&Pid_FFFF&Rev_9999")] //0C2E are the newer Honeywell scanners Xenon, Vuquest 3310 //VID 0C2E PID 0907 Xenon 1900HHD //VID 0C2E PID 0b67 Vuquest 3310g [HardwareId(@"HID\Vid_0C2E&Pid_0000&Rev_0000", @"HID\Vid_0C2E&Pid_FFFF&Rev_9999")] [ServiceObject(DeviceType.Scanner, "HoneywellScannerSO", "Service Object for Honeywell Barcode Scanners POS for .NET", 1, 14)] public class HandHeldScanner : ScannerBase { // Fields private bool autodisable = false; private string checkhealthtext = "OK"; private static byte[] CMD_HDR = new byte[] { 0x16, 0x4d, 13 }; private const int HANDHELD_VID = 0x218; private byte[] hhAimID; private byte hhCodeID; private byte[] hhScanData; private int hhScanDataLen; private IntPtr HidHandle = IntPtr.Zero; private USBReadThread ReadThread; private static byte[] SCANNER_DISABLE_CMD_STR = new byte[] { 0x16, 0x30, 13 }; private static byte[] SCANNER_ENABLE_CMD_STR = new byte[] { 0x16, 0x31, 13 }; private Hashtable scannerHash = new Hashtable(); //StringBuilder log = new StringBuilder(); // Methods public HandHeldScanner() {Saturday, October 13, 2018 3:55 PM | https://social.technet.microsoft.com/Forums/en-US/a5826127-2bd1-41f0-94b3-56170fce86ee/duplicate-instances-of-barcode-scanner-under-pos-for-net-114-and-windows-10?forum=posready | CC-MAIN-2020-50 | refinedweb | 1,403 | 65.62 |
Code covered by the BSD License
4.11111
by
Steven Michael
01 Mar 2005
(Updated
01 Apr 2008)
KD Tree range and nearest neighbor search.
|
Watch this File
This on i386 and x86_64 systems and Windows (i386).
This file inspired Efficient Kernel Smoothing Regression Using Kd Tree, Kdtree Implementation In Matlab, and Efficient K Nearest Neighbor Search Using Jit.
Dear all,
First thanks to the Author for a nice submition I am using often to work with lidar point clouds.
But now I wish to use kdtree_range with multiple boxes & single call (making a loop takes for ages!). This is, I define the "range" as 3D array. An example what I did:
>> r = rand(5,2);
>> tree = kdtree(r);
>> boxm
boxm(:,:,1) =
0 0
0.5000 0.5000
1.0000 1.0000
boxm(:,:,2) =
0.5000 0.5000
1.0000 1.0000
1.5000 1.5000
>> kdtree_range(tree,boxm)
Multple range input must have size (N,ndim,2)
But the boxm HAS the size (N - number of boxes, ndim - which is two, 2)!?!
Any suggestios how to make it work?
Many thanks in advance!
Maja
For one dimensional data I am searching, the range search gives some results which are outside the specified range by a tiny bit. e.g. 4-5 orders of magnitude relative to the search window size.
example for reproducing this : (takes a few minutes to complete run)
rng('default')
for i=1:1000
x=rand(100000,1);
tree=kdtree(x);
r=[0.3331 1/3];
idx=kdtree_range(tree,r);
found=x(idx);
if any(found<r(1))
disp(i)
disp(found(found<r(1))-r(1))
end
if any(found>r(2))
disp(i)
disp(found(found>r(2))-r(2))
end
end
I had some troubles compiling this on MacOSX Snow Leopard.
In order to compile correctly this package I just added the following line to the CXXFLAGS
" -undefined dynamic_lookup -bundle "
and also removed the other options specifically:
" -mtune=pentium4 -msse -msse2 -fPIC"
However, I doubt that keeping them would harm the compilation.
It is a very nice and well implemented package.
Many thanks for sharing it.
I just managed to compile mex files for this submission on windows 64.
It wasn't that easy, and I wasted a few good hours on that, but it works now.
I also admit I know almost nothing about compiling and so forth so there might be better ways to do it, but anyway these were my steps:
1. Installed MS Visual Studio 2010 (warning: it's not free, but luckily my helpdesk had a licensed copy)
2. Opened Matlab, typed
>> mex -setup
and went through the setup process.
3. opened the options file:
C:\Users\apaster\AppData\Roaming\MathWorks\MATLAB\R2011b\mexopts.bat
(find the right location by typing
and added the folder P:\Documents\MATLAB\kdtree\src that contained the source files of kdtree to the options file (lines 25-26):
set PATH=P:\Documents\MATLAB\kdtree\src;%VCINSTALLDIR%\bin\amd64;%VCINSTALLDIR%\bin;%VCINSTALLDIR%\VCPackages;%VSINSTALLDIR%\Common7\IDE;%VSINSTALLDIR%\Common7\Tools;%LINKERDIR%\bin\x64;%LINKERDIR%\bin;%MATLAB_BIN%;%PATH%
set INCLUDE=P:\Documents\MATLAB\kdtree\src;%VCINSTALLDIR%\INCLUDE;%VCINSTALLDIR%\ATLMFC\INCLUDE;%LINKERDIR%\include;%INCLUDE%
seems like if we don't do that, the compiler won't be able to find the kdtree.h file
4. from Matlab, typed:
>> cd 'P:\Documents\MATLAB\kdtree\src'
>> mex kdtree.cpp kdtree_create.cpp
>> mex kdtree_range.cpp kdtree.cpp
>> mex kdtree_closestpoint.cpp kdtree.cpp
5. That's it ! Now just copied the .mexw64 files to \kdtree\@kdtree folder.
Hope this works for you...
@sun jin:
copy the directory @kdtree to current working directory, then the examples in README.txt can be performed
@all:
in matlab 2010b, KDTreeSearcher class work fine for me. here is a simple example:
>> X = rand( 10, 2 ) ; % 10 center vectors
>> tree = kdtreesearcher( X )
>> T = rand( 3, 2 ) ; % 3 test vectors
>> [ k, d ] = knnsearch( tree, T , 'k', 1 )
The code is very well written. But the description is misleading. It says that "The tree can be queried for all points within a Euclidian range". But actually the range we specify is rectangular range in each dimensions.
This is a windows 64 bit make file, which seemed to work for me:
#######################################################################
# Makefile for MatlabCPP
#
#######################################################################
# MATLAB directory -- this may need to change depending on where you have MATLAB installed
MATDIR = C:\\Program Files\\MATLAB\\R2010a
INCDIR = /I "." /I "../../src" -I "$(MATDIR)/extern/include" -I"../Libs/"
CPP = cl
CPPFLAGS = /c /Zp8 /GR /W3 /EHs /D_CRT_SECURE_NO_DEPRECATE /D_SCL_SECURE_NO_DEPRECATE \
/D_SECURE_SCL=0 /DMATLAB_MEX_FILE /nologo /DWIN64
#CPPFLAGS = /c /Zp8 /MD /GR /W3 /EHs /D_CRT_SECURE_NO_DEPRECATE /D_SCL_SECURE_NO_DEPRECATE \#
# #/D_SECURE_SCL=0 /DMATLAB_MEX_FILE /nologo /D "CPP_ACCEPT_EXPORTS" /D_USERDLL /D_WINDLL \
# /DUNICODE /D_UNICODE /DWIN32
LINK = link
LINKFLAGS = /dll /export:mexFunction /MAP /MACHINE:X64 \
/LIBPATH:"$(MATDIR)\extern\lib\win64\microsoft" \
/LIBPATH:"../Libs/$(OUTDIR)" \
libmex.lib libmx.lib libmat.lib
INSTDIR = ./../../@kdtree/
# DEBUGBUILD = 1
!IF DEFINED(DEBUGBUILD)
OUTDIR = Debug/
CPPFLAGS = $(CPPFLAGS) /Fo"$(OUTDIR)" /DDEBUG
LINKFLAGS = $(LINKFLAGS) /INCREMENTAL /DEBUG
!ELSE
OUTDIR = Release/
CPPFLAGS = $(CPPFLAGS) /O2 /Oy- /Fo"$(OUTDIR)" /DNDEBUG
!ENDIF
.SILENT :
# Rules for making the targets
TARGETS = $(OUTDIR)kdtree.mexw64 \
$(OUTDIR)kdtree_closestpoint.mexw64 \
$(OUTDIR)kdtree_range.mexw64
all: $(TARGETS)
@copy $(OUTDIR:/=\)*.mexw64 $(INSTDIR:/=\)
@echo Files Built Successfully
clean:
@echo Cleaning output filder
@del $(OUTDIR:/=\)*.mexw64
@del $(OUTDIR:/=\)*.lib
@del $(OUTDIR:/=\)*.exp
@del $(OUTDIR:/=\)*.obj
@del $(OUTDIR:/=\)*.manifest
@del $(OUTDIR:/=\)*.map
@del $(INSTDIR:/=\)*.mexw64
rebuild: clean all
.SUFFIXES : mexw64
.SILENT :
# The below two lines don't seem to work -- I'll do it manually
#{$(OUTDIR)}.mexw64{$(OUTDIR)}.obj:
# $(LINK) $(OUTDIR)$< $(LINKFLAGS) /OUT:$(OUTDIR)$<.mexw32
{./../../src/}.c{$(OUTDIR)}.obj:
$(CPP) $(CPPFLAGS) $(INCDIR) $<
{./../../src/}.cpp{$(OUTDIR)}.obj:
$(CPP) $(CPPFLAGS) $(INCDIR) $<
$(OUTDIR)kdtree.mexw64 : $(OUTDIR)kdtree.obj $(OUTDIR)kdtree_create.obj
$(LINK) $(OUTDIR)kdtree.obj $(OUTDIR)kdtree_create.obj \
$(LINKFLAGS) /PDB:"$(OUTDIR)kdtree.pdb" \
/OUT:"$(OUTDIR)kdtree.mexw64"
$(OUTDIR)kdtree_closestpoint.mexw64 : $(OUTDIR)kdtree.obj $(OUTDIR)kdtree_closestpoint.obj
$(LINK) $(OUTDIR)kdtree.obj $(OUTDIR)kdtree_closestpoint.obj \
$(LINKFLAGS) /PDB:"$(OUTDIR)kdtree_closestpoint.pdb" \
/OUT:"$(OUTDIR)kdtree_closestpoint.mexw64"
$(OUTDIR)kdtree_range.mexw64 : $(OUTDIR)kdtree.obj $(OUTDIR)kdtree_range.obj
$(LINK) $(OUTDIR)kdtree.obj $(OUTDIR)kdtree_range.obj \
$(LINKFLAGS) /PDB:"$(OUTDIR)kdtree_range.pdb" \
/OUT:"$(OUTDIR)kdtree_range.mexw64"
??? Error using ==> class
The CLASS function must be called from a class constructor.
Hello, I am trying to do this:
>> r = rand(10,3);
>> tree = kdtree(r);
There is error
??? Error using ==> kdtree
Must have at least two input arrays.
How can I solve this?
Thank you
I have been trying to use this utilities for OSX under Matlab2008a.
In the Makefile I changed the compiler to standard g++ (g++4.3.1):
CXX = g++
Also I had to remove some of the CXX options to compile successfully:
-mtune=pentium4 -msse -msse2 -fPIC
The make clean didn't remove the symbolic link so an error would be generated if you tried to recompile the source.
Also I had to change several inclusion directives to comply with the standard in almost all the *.cpp files:
#include <mex.h> ==> #include "mex.h"
#include <kdtree.h> ==> #include "kdtree.h"
With these changes I tried to compile under linux, which with these changes was finally successfully. However, when I try to run the script provided as an example in README.txt I get the following:
>> r = rand(1000,3);
>> tree = kdtree(r)
??? Attempt to execute SCRIPT kdtree.kdtree as a function:
/cs/grad1/ata2/temp/kdtree 3/@kdtree/kdtree.m
I also tried to compile under OSX and i686-apple-darwin9-g++-4.0.1 and I get the following:
>> !make
g++ -Wall -O3 -fomit-frame-pointer -Isrc -I/Applications/matlab/extern/include -c src/kdtree_create.cpp -o kdtree_create.o
g++ -Wall -O3 -fomit-frame-pointer -Isrc -I/Applications/matlab/extern/include -c src/kdtree.cpp -o kdtree.o
ln -s @kdtree kdtree
g++ -Wall -O3 -fomit-frame-pointer -shared kdtree_create.o kdtree.o -o kdtree/kdtree.mexglx
Undefined symbols:
"_mxGetM", referenced from:
_mexFunction in kdtree_create.o
.....(many more of similar errors)
I finally give up using this utility...
At least it should have worked in Linux properly
please Post it a gain! The zip file does not contain enough files as described.
Thanks,
Paul
I think because I am using older Matlab version, 2006a, I cant run the program. I tried to use mex .cpp but still not successful. Anyone can help me for this?
Would it be possible to include a "NearestNeighborExceptThisPoint" function? For example, I need to find the nearest neighbor to each point in a set. So I build the kdtree, then the "nearest neighbor" to each point is itself!
I see some odd behavior when my data set has NaN members - it returns some random point as the best match rather than complaining. Other than that, this code was very helpful!
Excellent work. Very efficient. Better than the kdtree provided in MATLAB Image Processing Toolbox. Would you like to upgrade the single nearest point search to a k-NN search?
>> r = rand(1000,3);
>> tree = kdtree(r)
??? Error using ==> class
Class kdtree is not a valid class name.
Stellar(as usual)
Thanks+++++++++
It is a very good performance. But if I want to display the tree,how can I modify the programme. and if I want to count the searching times of this algorithm, how can I do.thank you.
Good, but supporting doubles would make it even better.
Absolutely perfect. Very well designed and packaged.
could you give your algorithms and time complexity analysis
Very good. Speeds up my program a lot
As Alaa said, great AND fast. Thank you, Steven!
Great and fast
A small but important change -- the border searching for the closest point function now compares |r|^2 instead of |r|. Removing the square root function siginificanly speeds up the closest point algorithm.
Fix memory leak in kdtree creation
Update such that the tree is serialized instead of stored in an abstract pointer. This means that the tree can be saved in a MATLAB file (or to disk) and loaded again quickly. Also, the implementation is now done using MATLAB classes.
Update so that kdtree_range searches can be done on multiple volumes with a single call. Also, the tree creation switches from using a quicksort to a heapsort -- seems to be a little faster.
1. Significantly speed up tree creation
2. Lower latency (removed tree "unserialization" for each function call)
3. fixed kdtree_closestpoint bug that returned incorrect points (but not indices) when querying closest point.
Fix an error that allocates too much memory when creating tree
Fix a memory error that sometimes causes incorrect results. | http://www.mathworks.com/matlabcentral/fileexchange/7030-kd-tree-nearest-neighbor-and-range-search | CC-MAIN-2014-10 | refinedweb | 1,724 | 59.9 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
How to fill in the default value by view, rather than models? [Closed]
I've extended the partner model as below.
from openerp import models, fields, api, exceptions
class Partner(models.Model):
_name = 'res.partner'
_inherit = 'res.partner'
@api.model
def _get_partner_category(self):
id = self.env['res.partner.category'].search([['name', '=', 'Registrant']]).id
return [(6,0,[id])]
national_id = fields.Char(string="National ID")
_defaults = {
'category_id': _get_partner_category,
}
As above, the code has made category_id and set the default tag to "Registrant".
However, this affect other modules that uses the res.partner model.
Can I set defaults on my views instead via the model?
Solution
Create a data xml that has a tag that I want.
<record model="res.partner.category" id="registrant_tag">
<field name="name">Registrant</field>
</record>
Now on my window action reference the to the tag with an eval option.
<record model="ir.actions.act_window" id="registrant_list_action">
<field name="name">Registrant</field>
<field name="res_model">res.partner</field>
<field name="domain">[('category_id.name','ilike','Registrant')]</field>
<field name="context" eval="{'default_category_id': [(6,0,[ref('warranty.registrant_tag')])] }"/>
<field name="view_type">form</field>
<field name="view_mode">kanban,tree,form</field>
<field name="help" type="html">
<p class="oe_view_nocontent_create">Add a registrant
</p>
</field>
</record>
Hello,
You can't set default value in View wise. However you can set default values in Window Action.
For example,
1. Go to Window Action menu, find "Countries" action and open it.
2. In context field, set "{'default_name':'India'}"
3. Go to Countries menu Sales >> Configuration >> Address Book >> Localization >> Countries and open it. Try to create new record. You will see name "India" by default.
Tip : In order to set default from front end, You need to add your exact field name followed by "default_" prefix in Window Action's context field.
Hope this helps,
It works partially. In the context field, I set "{'default_category_id':'[(6,0,[18])]'}" where'18' is the "Registrant" index. how do I assigned a function value to it? I tried "{'default_category_id': _get_partner_category}" it doesnt work.
Hello,
Your function _get_partner_category , catch the context that comes from the action, So you can add some custom value to the action's context just like:
{'my_menu': 'my_custom_menu'}
Then in your function check if the context contains the value 'my_menu' if yes then return the calculated value ...
Regards,
Sorry, but I still don't quite understand what you mean.? In Emipro's is already using an action and a context. Now you are suggesting to use another action ?
No, no another action. Empiro suggested to use the [ default_ ] in the action's context, to set the defaults value, but you can add a new value to the context. Then add some checks to your function. Just try to print the context in your function to get it . in your function just write: print self._context or self.env.context
Thanks Ahmed, I think I know where you are getting at but I think I'm quite satisfied with the tip Empiro provide me and manage to achieve what I | https://www.odoo.com/forum/help-1/question/how-to-fill-in-the-default-value-by-view-rather-than-models-closed-88450 | CC-MAIN-2017-04 | refinedweb | 532 | 51.85 |
table of contents
NAME¶
stpcpy - copy a string returning a pointer to its end
SYNOPSIS¶
#include <string.h>
char *stpcpy(char *dest, const char *src);
stpcpy():
- Since glibc 2.10:
- _POSIX_C_SOURCE >= 200809L
- Before glibc 2.10:
- _GNU_SOURCE
DESCRIPTION¶
The stpcpy() function copies the string pointed to by src (including the terminating null byte ('\0')) to the array pointed to by dest. The strings may not overlap, and the destination string dest must be large enough to receive the copy.
RETURN VALUE¶
stpcpy() returns a pointer to the end of the string dest (that is, the address of the terminating null byte) rather than the beginning.
ATTRIBUTES¶
For an explanation of the terms used in this section, see attributes(7).
CONFORMING TO¶
This function may overrun the buffer dest.
EXAMPLES¶
For example, this program uses stpcpy() to concatenate foo and bar to produce foobar, which it then prints.
#define _GNU_SOURCE #include <string.h> #include <stdio.h> int main(void) {
char buffer[20];
char *to = buffer;
to = stpcpy(to, "foo");
to = stpcpy(to, "bar");
printf("%s\n", buffer); }
SEE ALSO¶
bcopy(3), memccpy(3), memcpy(3), memmove(3), stpncpy(3), strcpy(3), string(3), wcpcpy(3)
COLOPHON¶
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | https://dyn.manpages.debian.org/unstable/manpages-dev/stpcpy.3.en.html | CC-MAIN-2021-49 | refinedweb | 229 | 73.47 |
GET_KERNEL_SYMS(2) Linux Programmer's Manual GET_KERNEL_SYMS(2)
NAME
get_kernel_syms - retrieve exported kernel and module symbols
SYNOPSIS
#include <linux/module.h>
int get_kernel_syms(struct kernel_sym *table);
Note: No declaration of this system call is provided in glibc headers;
see NOTES.
DESCRIPTION
Note: This system call is present only in kernels before Linux 2.6. ker-
nel.
VERSIONS
This system call is present on Linux only up until kernel 2.4; it was
removed in Linux 2.6.
CONFORMING TO
get_kernel_syms() is Linux-specific.
NOTES
This obsolete system call is not supported by glibc. No declaration is
provided in glibc headers, but, through a quirk of history, glibc ver-
sions before 2.23 did export an ABI for this system call. Therefore,
in order to employ this system call, it was sufficient to manually
declare the interface in your code; alternatively, you could invoke the
system call using syscall(2). 4.10 of the Linux man-pages project. A
description of the project, information about reporting bugs, and the
latest version of this page, can be found at.
Linux 2016-10-08 GET_KERNEL_SYMS(2) | http://man.yolinux.com/cgi-bin/man2html?cgi_command=get_kernel_syms(2) | CC-MAIN-2022-33 | refinedweb | 183 | 50.02 |
i have a webapp running on jboss-4.0.5.GA/apache-tomcat-5.5.20 that reads in
the http post request body and processes it.
i noticed that for request bodies that didn't contain line separators and that
had sizes that were exact multiples of
org.apache.catalina.connector.CoyoteReader.MAX_LINE_LENGTH (4096), i was
receiving null when calling org.apache.catalina.connector.CoyoteReader.readLine
().
i believe that the problem is at line 155 in
org.apache.catalina.connector.CoyoteReader, where on the last iteration through
the loop, "pos" does equal zero and null is returned even though data has been
aggregated.
here's a command to run in cygwin to easily reproduce the problem:
for requestSize in 4095 4096 4097 8191 8192 8193; do dd if=/dev/zero bs=1c
count=$requestSize | tr '\000' 'A' | curl --data-binary @- > $requestSize.txt; done;
output from directory listing (size filename):
4095 4095.txt
0 4096.txt
4097 4097.txt
8191 8191.txt
0 8192.txt
8193 8193.txt
here's the bulk of the servlet code i used to reproduce the problem:
public class DebugServlet extends HttpServlet {
protected void doPost(HttpServletRequest arg0, HttpServletResponse arg1)
throws ServletException, IOException {
BufferedReader br = arg0.getReader();
Writer writer = arg1.getWriter();
String line = null;
while ((line = br.readLine()) != null) {
writer.write(line);
}
writer.close();
br.close();
}
}
it appears that a workaround is to wrap the requests's input stream instead:
BufferedReader br = new BufferedReader(new InputStreamReader(arg0.getInputStream
()));
Created attachment 20405 [details]
Checks aggregator==null before returning null
I attached a patch to check that aggregator==null before returning null.
*** Bug 44060 has been marked as a duplicate of this bug. ***
I have reproduce similar behavior.
I cannot read anything from ServletRequest.getInputStream or getReader for
Tomcat ver 5.5.15+.
Any idea?
Regards
Mladen
Any chance of getting this fix in tomcat 5.5 ?
After a few hours of debugging my application, i finally discovered this bug was the root cause. It's still there in Tomcat 5.5.26.
Thanks for the patch. It has been applied to trunk and proposed for 6.0.x and 5.5.x.
This has been fixed in 5.5.x and will be included in 5.5.27 onwards. | https://bz.apache.org/bugzilla/show_bug.cgi?format=multiple&id=42727 | CC-MAIN-2019-09 | refinedweb | 372 | 61.12 |
Matthew,
2010/6/6 Matthew Bauer <mjbauer95@...>:
> When I run the command:
>>
>> gcc $(pkg-config --cflags --libs libgpod-1.0) -o tz ./get-timezone.c
>
> I get:
>>
>> ./get-timezone.c:37:42: error: dereferencing pointer to incomplete type
get-timezone.c isn't supposed to be compiled standalone, it's built as
part of libgpod build, so it needs a few changes when you want to
build it out of tree.
Christophe
tests/get-timezone.c doesn't compile for me.
#include <itdb.h>
was changed to
> #include <gpod/itdb.h>
and
#include <itdb_device.h>
had to be deleted.
When I run the command:
> gcc $(pkg-config --cflags --libs libgpod-1.0) -o tz ./get-timezone.c
I get:
> ./get-timezone.c:37:42: error: dereferencing pointer to incomplete type | https://sourceforge.net/p/gtkpod/mailman/message/25460877/ | CC-MAIN-2018-09 | refinedweb | 130 | 72.83 |
Data structure for an event.
More...
#include <events.h>
List of all members.
Data structure for an event.
A pointer to an instance of Event can be passed to pollEvent.
Definition at line 197 of file events.h.
[inline]
Definition at line 241 of file events.h.
Definition at line 223 of file events.h.
Joystick data; only valid for joystick events (EVENT_JOYAXIS_MOTION, EVENT_JOYBUTTON_DOWN and EVENT_JOYBUTTON_UP).
Definition at line 239 of file events.h.
Keyboard data; only valid for keyboard events (EVENT_KEYDOWN and EVENT_KEYUP).
For all other event types, content is undefined.
Definition at line 213 of file events.h.
True if this is a key down repeat event.
Only valid for EVENT_KEYDOWN events.
Definition at line 207 of file events.h.
The mouse coordinates, in virtual screen coordinates.
Only valid for mouse events. Virtual screen coordinates means: the coordinate system of the screen area as defined by the most recent call to initSize().
Definition at line 221 of file events.h.
Definition at line 226 of file events.h.
Mouse movement since the last mouse movement event.
This field is ResidualVM specific
Definition at line 233 of file events.h.
The type of the event.
Definition at line 200 of file events.h. | https://doxygen.residualvm.org/d8/d84/structCommon_1_1Event.html | CC-MAIN-2020-40 | refinedweb | 204 | 71.82 |
Language - Python Windows LiClipse_0<<
Add PhidgetHelperFunctions.py to the project by dragging it into the project.
Finally, run the.
Setting Up a New Project
When you are building a project from scratch, or adding Phidget functionality to an existing project, you'll need to configure your development environment to properly link the Phidget Python library.
To start, you need to create your project as shown in the previous section.
To include the Phidget Python library, add the following line to your code:
from Phidget22.PhidgetException import * from Phidget22.Phidget import *
Then, you will also have to add a reference to your particular Phidget. For example, you would include the following line for a DigitalInput:
from Phidget22.Devices.DigitalInput import *
The project now has access to Phidgets.. | https://www.phidgets.com/docs/index.php?title=Language_-_Python_Windows_LiClipse&oldid=29597 | CC-MAIN-2020-05 | refinedweb | 126 | 59.3 |
Seems like I've been exploring a lot these days. That means I've to settle down and build actual projects. So, this time it's the all popular React. I must say, I imagine writing components for every little thing that needs to get done. At the end, My project would have >= 80 JS files. 😅 Just Kidding.
But that's just imagination. Right now, I'm still learning the ropes. I understand that components are created either by:
- Using class syntax
- Using functions which receive props as arguments After creating components, they are rendered using
ReactDOM.render(<Component />, desired-node)
Something a little more real
Let's assume we have a div with id = "test node"If we want to fix stuff in it using react, it would probably go like this.
class MyComponent extends React.Component { constructor (){ super() } render (){ return ( <div> <h1>Just another boring text in a component</h1> </div> ) } }; ReactDOM.render(<MyComponent />, document.querySelector('#test-node'))
I hope I stick to this thing called React. --There are a million websites trying to teach me react--. I probably stick to it.
Discussion | https://practicaldev-herokuapp-com.global.ssl.fastly.net/vicradon/exploring-react-2c4n | CC-MAIN-2021-04 | refinedweb | 185 | 59.4 |
Word's table feature is powerful, but it seems to confuse users. Here are a few easy options for controlling Microsoft Word tables.
Word's table feature is powerful, but it seems to confuse users. Frankly, there's a lot to know, so it's easy to understand why tables sometimes frustrate us. The following tidbits are just a few table options that seem to trip up most people.
Split a table
Splitting an existing table into two tables is easier than you might think. Simply click in the row that will become the first row of the second table. Then, choose Split Table from the Table menu. In Word 2007/2010, click the Layout (context) tab. Then click Split Table in the Merge group.
Display a heading
If a table extends beyond a page, or across several pages, you might want to display the table's header row at the top of each page. To do so, select the header row(s) and then choose Heading Rows Repeat from the Table menu. In Word 2007/2010, click the Layout (context) tab. Then, click Repeat Header Rows in the Data group.
I find this particular option a bit buggy. If you're formatting a new table, it'll probably work for you. However, if you're working with an existing table, Word might ignore you. If this happens, copy the table into a blank document and then copy it back into the original document. Then set the repeating rows. Or, select the table and disable the repeating rows option before trying to set it.
Prevent orphans
By default, a single item can extend past the bottom of a page and onto the next page—technically, that's an orphan. To avoid orphans, select the table and choose Table Properties from the Table menu. In Word 2007/2010, click inside the table and click the Layout (context) tab. Then, click Properties in the Table group. On the Row tab, uncheck the Allow Row To Break Across Page option in the Options section, and click OK.
Quick numbering
To number items in a column, select the column and click Numbering on the Formatting toolbar or click Numbering on the Home tab. To number items in a cell, highlight everything in the cell and click Numbering. This works only if you separate items in a cell using a hard return (press Enter).
What's your favorite table trick?
TechRepublic's Microsoft Office Suite newsletter, delivered every Wednesday, is designed to help your users get the most from Word, Excel, and Access. Automatically sign up today! | https://www.techrepublic.com/blog/microsoft-office/4-simple-tricks-for-manipulating-word-tables/ | CC-MAIN-2020-24 | refinedweb | 432 | 74.79 |
Unable to import binding
Discussion in 'ASP .Net Web Services' started by robert.p.king@gmail.com, Dec 8,35
- Jordan
- Feb 10, 2004
GridView binding - how to stop initial bindingAmit, Apr 29, 2006, in forum: ASP .Net
- Replies:
- 6
- Views:
- 13,806
- Assimalyst
- Oct 24, 2006
WSDL failing to parse Unable to import bindingBesharam, Oct 1, 2003, in forum: ASP .Net Web Services
- Replies:
- 0
- Views:
- 280
- Besharam
- Oct 1, 2003
Unable to import Binding "xxx" from namespace "xxx" while running the WSDL tool for a Java Webservic, Jun 21, 2006, in forum: ASP .Net Web Services
- Replies:
- 0
- Views:
- 1,026
Unable to import bindingimonline, Nov 16, 2006, in forum: ASP .Net Web Services
- Replies:
- 4
- Views:
- 845
- John Saunders
- Nov 18, 2006 | http://www.thecodingforums.com/threads/unable-to-import-binding.785633/ | CC-MAIN-2014-41 | refinedweb | 125 | 71.24 |
node.js firstly is a (unexpectedly) spawned server side execution of Google's V8
JavaScript engine.
The V8 engine is said to be also used by the Chrome browser. node.js may be compared to something like
VB runtime enabling VBScript.
So this enables JavaScript to be used on server-side.
All browsers have a
JavaScript engine which nobody seems to have thought
of making into a server-side engine which will allow JavaScript enthusiasts code on the server-side. Now node.js is popular, and Microsoft even
supports node.js scripting on their Azure cloud systems (could help tap
JavaScript coders to use Azure).
This article is to quickly dive into some examples of using node.js
Type the below line in a text file, preferably named helloworld.js
console.log('Hello world!');
and run it on the command line with node engine
node c:\work\helloworld.js
To do the above.. the pre-requisite is you have
node.js installed.. (It is similar to installing a desktop application)
In essence of JavaScript being very good for AJAX.. the same seems to be the most popular use of node.js as well.. making requests or serving requests.
Though other uses are possible.. most of the samples you find will be to make or respond to server requests.
We will create some examples of them.
The below code saved in a file like test.js and executed with node, will do a
HTTP request to fetch a page and just print the results.. the HTML.
var http = require('http');
var options = {
host: '',
port: 80,
path: '',
method: 'GET'
};
var req = http.request(options, function(res) {
console.log('STATUS: ' + res.statusCode);
console.log('HEADERS: ' + JSON.stringify(res.headers));
res.setEncoding('utf8');
res.on('data', function (chunk) {
console.log('BODY: ' + chunk);
});
});
req.on('error', function(e) {
console.log('problem with request: ' + e.message);
});
req.write('data\n'); //this line prints first, while the request and the content
//printing happens a bit slow as its an internet call.
req.end();
The style of script is just the same as JavaScript coding with variables, objects, and namespaces. Yes.. that, and experience with jQuery like library
usage, is a pre-requisite for readers of this article.
node.js has inbuilt interfaces exposing support for
HTTP functions, JSON, etc.,
which we have used above. Some other details on the above HTTP request example:
http.request - makes the request and its second parameter is a callback
event handler in javascript callback handlers wiring style
JSON.stringify - yes, JSON is inbuilt into the node engine.
Events handlers are wired, as below.
res.on('data', callback fn);
where the response object fires the data event every time it receives data after the http request is raised.
Other than inbuilt interfaces, node.js can also use libraries like the
noderss, htmlparser, etc., These libraries can be installed using npm.
npm - is node's package manager.. (I guess it might be using the open-source rpm -Redhat
Linux package manager internally and so the name.. but that's a guess. Somebody let me know if I am right)
Installed libraries will be in a folder 'node_modules' in your node installation directory or the user directory. Under 'documents and settings' on a windows machine,
for example c:\Documents and Settings\username\Application Data\npm\node_modules.
node_modules
documents and settings
There are numerous node capable libraries added on github by the day... so node.js's capabilities are being improved by the day or if you may, by the hour.. so you may not
necessarily wait for another version of node.js to be released to write better code. Another reason for saying that is, since it is
JavaScript based, the scripting language being a public standard.. the engine itself might not need/have improvements unless
JavaScript versions upgrade.
(Another guess.. I think libraries for node are written in
JavaScript.. guessing from what I have seen inside node_modules folder. Am I right? So every good
JavaScript developer should be able to create libraries for node? Let me know.)
This next example, uses node's inbuilt support of
HTTP and file system functions, and an external library xml2js, which you have to install with npm.
Type below command on your command prompt to install xml2js
npm install xml2js
npm install xml2js
or
npm install --global xml2js
npm install --global xml2js
Troubleshooting 'xml2js not found' errors.
I personally didn't have success using xml2js unless I copied the xml2js package into the root folder of my .js project folder.
var http = require('http'),
fs = require('fs'),
xml2js = require('xml2js');
////////// code to get xml from url
var options = {
host: '',
port: 80,
path: '/WebServices/ArticleRSS.aspx',
method: 'GET'
};
var req = http.request(options, function(res) {
console.log('STATUS: ' + res.statusCode);
console.log('HEADERS: ' + JSON.stringify(res.headers));
res.setEncoding('utf8');
var fullResponse = "";
res.on('data', function (chunk) {
//console.log('BODY: ' + chunk);
fullResponse = fullResponse+chunk;
});
res.on('end', function(){
parser.parseString(chunk);
console.log(item.title + '\n');
});
});
req.on('error', function(e) {
console.log('problem with request: ' + e.message);
});
/////////xml load example from xml from file
fs.readFile(__dirname + '/ArticleRSS.aspx.xml', function(err, data) {
parser.parseString(data);
jsonObj.channel.item.forEach(function(item){
console.log(item.title + '\n');
});
console.log('Done');
process.exit(0);
});
//code to parse xml
var jsonstring = "";
var jsonObj;
var parser = new xml2js.Parser();
parser.addListener('end', function(result) {
jsonstring = JSON.stringify(result);
jsonObj = JSON.parse(jsonstring);
console.log('Done');
});
xml2js usage is explained
here.
The above code gets the content
of the CodeProject article RSS feed, calls a handler to parse the XML, to extract or print the title of all nodes in the RSS.
shows an example use of forEach
forEach
shows an example of forcefully stopping node program with process.exit(0)
process.exit(0)
shows examples for loading an rss feed from a filesystem file and from a web url.
Technically yes. You may try installing the npm and start using
it.
npm install jquery
npm install jquery
I couldn't, because some dependencies were failing for me.. still troubleshooting.
This is a service, which accepts a search string, searches codeproject.com and returns the results as an RSS feed.
It is possible to write entire server side applications, service layers etc., with node.js. Some
SOAP + XML external libraries available on npm repository allow you that.
To keep it easy for now (since doing proper
WSDL like service for this example's purpose seemed needing more code and testing) I built a quick REST service all with a server... all in
JavaScript. Remember, no need for web server, socket programming, etc.
In the below example, we are creating and hosting a web server handling REST service requests
and of course the inbuilt HTTP methods. I have written all the code with callback handlers.. so the methods don't return anything, so no line of code has to wait to use the returned values. Though this style of coding, makes the readability of the code a bit complex, it allows keeping what node.js is good at.. non-blocking operations. Yes,
JavaScript developers will be implementing parallelism, without writing any sockets or having to do anything special for implementing async execution.
var http = require('http'),
rss = require('rss'),
qs=require('querystring'),
htmlparser = require("htmlparser"),
htmlselect = require('soupselect');
/////////// Create and start the server to handle requests
http.createServer(function (request, response) {
if (request.url.indexOf('searchcp')==-1)
{
response.writeHead(200, {'Content-Type': 'text/html'});
response.end('Invalid call\n');
}
else
{
//tell the client that return value is of rss type
response.writeHead(200, {'Content-Type': 'application/rss+xml'});
//extract the searchquery from the querystring.. this is a REST service
var str_querystring = request.url.substring(request.url.indexOf('searchcp'));
var searchString = qs.parse(str_querystring).searchcp;
console.log('\nsearch string:'+qs.parse(str_querystring).searchcp);
//method call to process the search and print rss
processquery(searchString, response);
}
}).listen(8080);
console.log('Server running at');
function processquery(searchString, responseObj)
{
// code to get xml
var options = {
host: '',
port: 80,
path: '/search.aspx?q='+encodeURIComponent(searchString)+'&sbo=kw',
method: 'POST'
};
var handler = new htmlparser.DefaultHandler(function (error, dom) {
if (error)
{}
else
{
var cpsearchresults=[];
//get nodes in html that we are interested in
//and loop through each node.. each being one searh result item
//of codeproject search result html..
//**depends on codeproject html not having changed
//**since this time when I tested.
htmlselect.select(dom, "div.result").forEach(function (ele){
tmpTitle = htmlselect.select(ele, "span.title a");
if (tmpTitle != undefined){
if (tmpTitle[0] != undefined) {
var itemTitle=""; itemURL="";
try{
itemTitle = tmpTitle[0].children[0].data;
itemURL = tmpTitle[0].attribs.href;
if (itemTitle != "" && itemURL != "")
{
tmpObj = {Title:itemTitle, URL:itemURL};
cpsearchresults.push(tmpObj);
//print(cpsearchresults[cpsearchresults.length-1].Title);
}
}
catch(err)
{
//print('Err:'+err);
//skip record and continue
}
}}
});
///////////// Generate RSS feed
var feed = new rss({
title: 'Codeproject search result Sample RSS Feed',
description: 'Code project search Results RSS feed through node.js sample',
feed_url: '',
site_url: '',
author: 'You'
});
/* loop over data and add to feed */
cpsearchresults.forEach(function(item){
//print(item.Title);
feed.item({
title: item.Title,
url: ''+item.URL
});
});
//Print the RSS feed out as response
responseObj.write(feed.xml());
responseObj.end();
}
});
var html_parser = new htmlparser.Parser(handler);
var req = http.request(options, function(res) {
console.log('STATUS: ' + res.statusCode);
//console.log('HEADERS: ' + JSON.stringify(res.headers));
res.setEncoding('utf8');
var alldata = "";
res.on('data', function (chunk) {
alldata = alldata+chunk;
});
res.on('end', function(){
html_parser.parseComplete(alldata);
});
});
req.on('error', function(e) {
console.log('problem with request: ' + e.message);
});
req.write('data\n');
req.end();
}
function print(value)
{
console.log('\n'+value);
}
The above example is similar to the previous example until the step where it fetches content .. previous example fetched an RSS, this fetches a html page
Then it extracts + loops through search result nodes in the html of
CodeProject's search page using the
soupselect library which is documented here
Then create an array of search result titles + links.
Then converts that array into
RSS feed, using
node.js rss library and prints out as html response.
node.js executes line1 and goes to line2.. without waiting on line1 even, if say line1 makes
a call to a method which takes time to complete.
I am not sure if it is even possible to forcefully make node wait on a line... if at some point
of implementation you even require that. (Is it possible.. anybody? Let me know.)
That's it for. | https://www.codeproject.com/Articles/339740/Know-JavaScript-use-node-js-server-side?fid=1692994&df=10000&mpp=50&sort=Position&spc=None&noise=5&prof=True&view=None | CC-MAIN-2017-30 | refinedweb | 1,732 | 52.46 |
We are interested to use this library and gto a few questions, where can we post?
10x,
Alex
Is there a simple way to use the library to run an SQL statement? We need this feature as a temporary fallback in case the library doesn't cover a specific SQL syntax requirment.
Well, it's better to use this one (especially if you're using SQL):. We've put quite a lot of work into it after it's been forked from Andrey's prototype. The downside is the documentation is not up to date but we're going to address that soon. Back to your question: Assuming you have transaction in scope you acquire its JDBC connection by calling
val connection = Session.get().connection
and then just use any JDBC APIs of your liking.
Regards,
Max
My fork of Exposed () is published under the kotlinx.* namespace:
Using a repo:
maven { url '' }
org.kotlinx:kotlinx.sql:0.10.4-SNAPSHOT
Note that the version name matches the Kotlin version we compiled against (and any subversion would add another '.') ... Or you can use that version to build yourself using Gradle. Any libraries we use from Kotlin, we clean up for Gradle, add to our build server, and publish for others ... hopefully this gets taken over by the library authors.
(yes, I issued a pull request to update this in the original) | https://discuss.kotlinlang.org/t/exposed/375 | CC-MAIN-2018-30 | refinedweb | 230 | 73.88 |
You are right - I was wrong! Sorry!
In actuality, the process can be described as pseudo-validation. The IE5
parser seems to process the stylesheet as it parses the document - the
document will display the stylesheet attributes as is parses, up to any
errors which cause the processing to stop, rather than parse the document,
then the stylesheet.
-----Original Message-----
From: Jonathan Marsh [mailto:jmarsh@xxxxxxxxxxxxx]
Sent: Friday, December 18, 1998 11:32 AM
To: 'xsl-list@xxxxxxxxxxxxxxxx'
Subject: RE: Does the XSL processors within IE5beta2 validate a XSL
style sheet?
The IE5 XSL processor does not validate the stylesheet against a DTD.
Indeed, since XSL relies heavily on namespaces, I don't think this is
possible.
The IE5 XSL processor does leverage the IE5 XML parser to load the
stylesheet, thus reporting well-formedness errors, and even reporting
validation failures against a DTD or Schema if that is specified in the XSL
file. After parsing and any validating requested by the author of the
stylesheet, the XSL processor compiles the stylesheet for execution. Errors
reported such as the <xsl:process-children/> one below are reported at this
stage, thus they are really compiler errors rather than true validation
errors. Runtime errors (including script errors) are reported while the
stylesheet is being run.
> -----Original Message-----
> From: Markor, John (Non-HP) [mailto:jmarkor@xxxxxxxxxx]
> Sent: Friday, December 18, 1998 10:24 AM
> To: 'xsl-list@xxxxxxxxxxxxxxxx'
> Subject: RE: Does the XSL processors within IE5beta2 validate a XSL
> style sheet?
>
>
>.
>
>
> -----Original Message-----
> From: Amit Rekhi [mailto:amitr@xxxxxxxxxxxxx]
> Sent: Friday, December 18, 1998 3:19 AM
> To: xsl list
> Subject: Does the XSL procesor within IE5beta2 validate a XSL
> stylesheet?
>
>
> Hello,
>
> I was trying to render an XML file using an XSL stylesheet
> in IE5beta2,
> and discovered that the XSL processor within IE5beta2 was
> not only checking
> for well-formedness of the XSL file but was also trying to validate
> the XSL stylesheet. for eg.
>
> A sample XSL stylesheet
>
> 1. <xsl:stylesheet xmlns:
> 2. <xsl:template
> 3. <HTML>
> 4. <BODY BGCOLOR="#FFFFCC">
> 5. <xsl:process-children/>
> 6. </BODY>
> 7. </HTML>
> 8. </xsl:template>
> 9. <xsl:template
> 10. <B>Hi!</B>
> 11. </xsl:template>
> 12.</xsl:stylesheet>
>
>
> When I try to link the above XSL file to an XML file and try
> running the XML thru IE5, it gives an error saying
>
> "Keyword xsl:process-children may not be used here."
> Here = Line No. 5 above
>
> This gives me the impression that IE5 may be validating an
> XSL file against
> a predefined XSL DTD before rendering it.
> Am I right?
> If yes
> * where can I get the DTD from?
>
> Why is it that <xsl:process-children/> cannot be a part of
> the the <BODY>
> element
> content?
> Is the content model for <BODY> defined in some XSL DTD which
> IE5 validates
> an XSL file against?
>
>
> Thanks,
> AMIT
>
>
>
> XSL-List info and archive:
>
>
> XSL-List info and archive:
>
XSL-List info and archive:
XSL-List info and archive: | http://www.oxygenxml.com/archives/xsl-list/199812/msg00269.html | crawl-002 | refinedweb | 493 | 60.75 |
Welcome to the Parallax Discussion Forums, sign-up to participate.
Mult10 mov Answer,X ‘ Multiply X by 10
shl Answer,#2 ‘ First multiply Answer by 4
add Answer,X ‘ Add X for multiply by 5
shl Answer,#1 ‘ Multiply by 2 for 10
frank freedman wrote: »
There is an excellent work by Sridhar Anandakrishnan called "Propeller Programming in Assembly, Spin and C.
pilot0315 wrote:
I go back to the IBM 1130 ...
Int LDA Sub save return
STA MySave
BRM Sub
LDA MySave restore
STA Sub
tomcrawford wrote: »
The SDS 900 series (and big brother 9300) stored the return address (and the overflow bit) in the first word of the subroutine.
This led to obvious problems when calling sub-routines from the background and an interrupt (or from multiple interrupts). You had to
Int LDA Sub save return
STA MySave
BRM Sub
LDA MySave restore
STA Sub
Of course, this isn't a problem with Prop PASM since it does't have interrupts.
Yes, call and return are not intuitive, but they are simple and done with a jump/return (JMPRET) and jump (JMP) instruction. The assembler also has the CALL and RETURN “macros” to simplify the work.
The JMPRET instruction sets the program counter to the source address of the instruction effectively doing a jump operation. As a “side effect”, it stores the old contents of the program counter (the “return address”) in the source address portion of the long/instruction at the destination address of the JMPRET. If that long/instruction is a JMP instruction, it works to provide a “return” to just after the CALL or JMPRET. If that long/instruction is a JMPRET instruction, you have a mechanism for doing coroutines (an advanced topic).
PASM is slightly different from normal assemblers, like everything on the P1 is slightly different form other MCs.
DeSilva is very deep in the details, to begin with it might be better to just look thru the proptool examples.|
But you asked for basics and I will try to explain some small hurdles you have to take, mentally.
First of all, in opposite to other MCs your PASM program does NOT run in HUB ram, it gets just loaded from there into the COG and the copy in COG ram gets executed.
Second concept to wrap your head around is that each COG ram location is also a register. So in opposite to other MCs you do not have some x registers and a ram for executing code, you are basically executing your code inside of your 512 registers.
So every instruction in your source code ends up in a ram location you are able to access as a register also. That leads to interesting possibilities for self modifying code, often used to save space.
The next hurdle is that the P1 does neither has a stack nor a stack-pointer-register.
CALL and RET simulate a one level stack by effectively saving the return address at the ram/register of the RET instruction when the CALL gets executed. Works perfectly well but recursion is not supported and needs to be handled different.
Like Mike Green said the JMPRET instruction does all the magic, but it is more easy to use CALL and RET until you have thee need to use JMPRET direct.
I am not sure if this is what you called entry and exit, but there is some other concept to wrap your head around.
In other MCs you call a assembler routine from say your basic program and then return to your program after the assembler routine finished.
With the P1 one usually works completely different. You start a assembler-blob in a COG running in a endless loop, checking some HUB-address used as a Mailbox between parallel running processes.
Your main program writes a command into the mailbox and does what it usually does or waits for a answer from the second process if needed.
The PASM-Program in the second COG/process now reads the command in the mailbox, does what needs to be done and
confirms via the mailbox the result.
So the entry is basically starting the parallel running COG and the exit would be to stop that parallel running COG.
Any 'call' to use the parallel running COG is usually done by using HUB addresses as Mailbox, known to caller and callee.
Writing this down it seems to be very complicated, but in fact it is a very simple way to handle real parallel processes, compared to time-slices used with interrupts.
One has just to - hmm - do things a little bit different on the Propeller, but then it shines... the grass is greener on the other side...it's time to water your lawn.
I gave up on that half long ago.........
Frank, I have the Sridhar book found it online. Helpful but still complex for the beginning stuff.
I am in in touch with Sridhar, he seems to be a nice guy and is trying to help.
Trying to understand the basic math of multiply and division etc. If you or anyone has a good simple explanation of the add, sub shift methods that would be great.
Bit shifting math was so long ago!!!
Thanks
Thanks for this information - I had not heard of this book.
"We keep moving forward, opening new doors, and doing new things, because we're curious and curiosity keeps leading us down new paths." - Walt Disney
He assumes like many others that they are talking to engineers who have had experience in ASSEMBLY like IBM. I took that class 30 years ago plus. So I have to start over.
I have found the book to be very useful, but if someone does not have a bit of pasm, they can still gain a lot from this book as a tutorial AND a guide to what you may need to research in the forums or parallax documents. There was a rather fiery thread a few years ago that if you filter the slag, has quite a bit of gold in it from the best of the best at that time. But then again, assembler is not for the novice. In any hardware. As someone else said on the forums, they have never seen a CS101 using assembler. If someone wants a skill bad enough, they will do what they need to acquire said skill..........IM(not so)HO
I gave up on that half long ago.........
BTW Alles ist in orgnung!
I gave up on that half long ago.........
-Phil
This led to obvious problems when calling sub-routines from the background and an interrupt (or from multiple interrupts). You had to
Of course, this isn't a problem with Prop PASM since it does't have interrupts.IRC the IBM1401 SPS did the same thing, although I am basing that on using a 1401 emulator that was running on a Collins 8400 system.
Life is unpredictable. Eat dessert first. | http://forums.parallax.com/discussion/comment/1444292/ | CC-MAIN-2019-18 | refinedweb | 1,162 | 69.82 |
Loss functions in Python are an integral part of any machine learning model. These functions tell us how much the predicted output of the model differs from the actual output.
There are multiple ways of calculating this difference. In this tutorial, we are going to look at some of the more popular loss functions.
We are going to discuss the following four loss functions in this tutorial.
- Mean Square Error
- Root Mean Square Error
- Mean Absolute Error
- Cross-Entropy Loss
Out of these 4 loss functions, the first three are applicable to regressions and the last one is applicable in the case of classification models.
Table of Contents
Implementing Loss Functions in Python
Let’s look at how to implement these loss functions in Python.
1. Mean Square Error (MSE)
Mean square error (MSE) is calculated as the average of the square of the difference between predictions and actual observations. Mathematically we can represent it as follows :
Python implementation for MSE is as follows :
import numpy as np def mean_squared_error(act, pred): diff = pred - act differences_squared = diff ** 2 mean_diff = differences_squared.mean() return mean_diff act = np.array([1.1,2,1.7]) pred = np.array([1,1.7,1.5]) print(mean_squared_error(act,pred))
Output :
0.04666666666666667
You can also use mean_squared_error from sklearn to calculate MSE. Here’s how the function works:
from sklearn.metrics import mean_squared_error act = np.array([1.1,2,1.7]) pred = np.array([1,1.7,1.5]) mean_squared_error(act, pred)
Output :
0.04666666666666667
2. Root Mean Square Error (RMSE)
Root Mean square error (RMSE) is calculated as the square root of Mean Square error. Mathematically we can represent it as follows :
Python implementation for RMSE is as follows:
import numpy as np def root_mean_squared_error(act, pred): diff = pred - act differences_squared = diff ** 2 mean_diff = differences_squared.mean() rmse_val = np.sqrt(mean_diff) return rmse_val act = np.array([1.1,2,1.7]) pred = np.array([1,1.7,1.5]) print(root_mean_squared_error(act,pred))
Output :
0.21602468994692867
You can use mean_squared_error from sklearn to calculate RMSE as well. Let’s see how to implement the RMSE using the same function:
from sklearn.metrics import mean_squared_error act = np.array([1.1,2,1.7]) pred = np.array([1,1.7,1.5]) mean_squared_error(act, pred, squared = False)
Output :
0.21602468994692867
If the parameter ‘squared‘ is set to True then the function returns MSE value. If set to False, the function returns RMSE value.
3. Mean Absolute Error (MAE)
Mean Absolute Error (MAE) is calculated as the average of the absolute difference between predictions and actual observations. Mathematically we can represent it as follows :
Python implementation for MAE is as follows :
import numpy as np def mean_absolute_error(act, pred): diff = pred - act abs_diff = np.absolute(diff) mean_diff = abs_diff.mean() return mean_diff act = np.array([1.1,2,1.7]) pred = np.array([1,1.7,1.5]) mean_absolute_error(act,pred)
Output :
0.20000000000000004
You can also use mean_absolute_error from sklearn to calculate MAE.
from sklearn.metrics import mean_absolute_error act = np.array([1.1,2,1.7]) pred = np.array([1,1.7,1.5]) mean_absolute_error(act, pred)
Output :
0.20000000000000004
4. Cross-Entropy Loss Function in Python
Cross-Entropy Loss is also known as the Negative Log Likelihood. This is most commonly used for classification problems. A classification problem is one where you classify an example as belonging to one of more than two classes.
Let’s see how to calculate the error in case of a binary classification problem.
Let’s consider a classification problem where the model is trying to classify between a dog and a cat.
The python code for finding the error is given below.
from sklearn.metrics import log_loss log_loss(["Dog", "Cat", "Cat", "Dog"],[[.1, .9], [.9, .1], [.8, .2], [.35, .65]])
Output :
0.21616187468057912
We are using the log_loss method from sklearn.
The first argument in the function call is the list of correct class labels for each input. The second argument is a list of probabilities as predicted by the model.
The probabilities are in the following format :
[P(dog), P(cat)]
Conclusion
This tutorial was about Loss functions in Python. We covered different loss functions for both regression and classification problems. Hope you had fun learning wiht us! | https://www.journaldev.com/46757/loss-functions-in-python | CC-MAIN-2021-17 | refinedweb | 707 | 60.41 |
If you looked at the variables, repetition and commands chapters, you already know a great deal about how to keep your code tidy and organized. Variables can be used to store data, commands to manipulate data, and for-loops to do things repeatedly. As you move on to bigger projects you’ll probably want to learn about classes as well. A class is a programming convention that defines a thing (or object) together with all of the thing’s properties (e.g. variables) and behavior methods (e.g. commands). If you take a look at the source code of PlotDevice libraries you’ll notice that they are full of classes.
For example, a Creature class would consist of all the traits shared by different creatures, such as size, position, speed and direction (properties), and avoid_obstacle(), eat_food() (methods).
We can then use the Creature class to generate many different instances of a creature. For example, an ant is a small and speedy creature that wanders around randomly looking for food, avoiding obstacles by going around them. A dinosaur is also a creature, but bigger and slower. It eats anything crossing its path and avoids obstacles by crashing through them.
As we saw in the Commands chapter, there’s a difference between defining something and using it. This same distinction also applies to classes: you first need to teach PlotDevice about your new object type, then later in the script you can create as many ‘instance’ of the type as you want.
Class definitions have a syntax that looks quite similar to what we’ve seen before. There’s a line starting with the class statement where we set the name for our new class, then an indented block of one or more method definitions. As always, don’t forget the colon that begins the indented block:
class ClassName: ... # method defintions
Every class has a method named __init__() which is executed once when an instance of the class is created. This method sets all the starting values for an object’s properties and defines the parameters accepted when creating new objects.
For example, here’s what a simple definition of a Creature class would look like:
class Creature: def __init__(self, x, y, speed=1.0, size=4): self.x = x self.y = y self.speed = speed self.size = size
As you can see, a creature has x and y properties (its location in space) as well as speed and size properties (which we’ll use to move the creature around). Since speed and size are defined with default values, they are optional when instantiating a new creature.
You create instances of a class by calling the class name as if it were a command, but including the parameters defined in its __init__() method. The one tricky bit is that you should ignore the
self argument, since it will be filled in automatically (see below).
ant = Creature(100, 100, speed=2.0) dinosaur = Creature(200, 250, speed=0.25, size=45)
To change a property value for a creature later on we can simply re-assign it:
ant.speed = 2.5
As with command names, Python has some conventions for naming classes that you should try to follow:
CamelCasenotation.
_
Let’s add a roam() method to the Creature class to move creatures around randomly. A class method looks exactly like a command definition but it always takes self as the first parameter. The self parameter is the current instance of the class. It allows you to access all of an object’s current property values and methods from inside the method.
class Creature: def __init__(self, x, y, speed=1.0, size=4): self.x = x self.y = y self.speed = speed self.size = size self._vx = 0 self._vy = 0 def roam(self): """ Creature changes heading aimlessly. """ v = self.speed self._vx += random(-v, v) self._vy += random(-v, v) self._vx = max(-v, min(self._vx, v)) self._vy = max(-v, min(self._vy, v)) self.x += self._vx self.y += self._vy
So now we have added two new properties to the class, _vx and _vy, which store a creature’s current heading. Both property names start with an underscore because they are for private use inside the class methods (i.e., no one should directly manipulate ant._dx from the outside).
Each time the roam() method is called we add or subtract a random proportion of the creature’s speed from its heading, like a pinwheel twirling around randomly. We also make sure the horizontal and vertical ‘velocities’ don’t exceed the creature’s maximum speed. Otherwise the creature would start going faster and faster which isn’t very realistic. Finally, we update the creature’s position by stepping it toward the new heading.
Now we can create two ants (a small fast one and a big slow one). Since their size and speed differ they will roam in different ways.
ant1 = Creature(300, 300, speed=2.0) ant2 = Creature(300, 300, speed=0.5, size=8) speed(30) def draw(): ant1.roam() ant2.roam() arc(ant1.x, ant1.y, radius=ant1.size) arc(ant2.x, ant2.y, radius=ant2.size)
Now that we know the basics about classes, properties and methods, we can take things a lot further. We can define different classes for different things and have them interact with each other.
If we want to have our little critters avoid obstacles while wandering around, they will need to have some sense of the world around them. Our code will need to keep track of where the obstacles are located, and the creature will need to have some sort of ‘feeler’ so it can detect looming collisions.
A feeler could be the creature’s sight, hearing, or antennae – it’s really just a metaphor for the sensory feedback we’ll be providing it from ‘the world’. For our simple class it can represented as an
x/
y ‘test’ point just in front of the creature’s current heading. If this point falls inside an obstacle, it’s time for the creature to change its direction.
The creature is heading into an obstacle – it can ‘sense’ the obstacle with its feeler and will need to adjust its bearing clockwise to avoid colliding with it.
We’ll start out by creating a World class which we can use to store a list of obstacles. Later on we can add all sorts of other stuff to the world (e.g. weather methods, colony location properties, etc.). Note that we’re defining it as a class even though we’ll only ever create one
World object in any of our simulations.
class World: def __init__(self): self.obstacles = []
Next we’ll define an Obstacle class. In our simulation we’ll be creating a number of
Obstacle objects and storing them in a shared
World object. The
Obstacle itself doesn’t need to know about the world-at-large (since all it ‘does’ is occupy space), so it only keeps track of its personal location and size:
class Obstacle: def __init__(self, x, y, radius): self.x = x self.y = y self.radius = radius
To know if a creature is going to run into an obstacle we need to know if the tip of its feeler intersects with an obstacle. Obviously we’re going to need some math to calculate the coordinates of the tip of the feeler. Luckily everything we need is already described in the tutorial on geometry. We could use the methods provided by Point objects, but instead let’s define some equivalent geometry commands that work with plain-old numbers:
from math import degrees, atan2 from math import sqrt, pow from math import radians, sin, cos def angle(x0, y0, x1, y1): """Returns the angle between two points.""" return degrees( atan2(y1-y0, x1-x0) ) def distance(x0, y0, x1, y1): """Returns the distance between two points.""" return sqrt(pow(x1-x0, 2) + pow(y1-y0, 2)) def coordinates(x0, y0, distance, angle): """Returns the coordinates of given distance and angle from a point.""" return (x0 + cos(radians(angle)) * distance, y0 + sin(radians(angle)) * distance) def reflect(x0, y0, x1, y1, d=1.0, a=180): """Returns the coordinates of x1/y1 reflected through x0/y0""" d *= distance(x0, y0, x1, y1) a += angle(x0, y0, x1, y1) x, y = coordinates(x0, y0, d, a) return x, y
With the above coordinates() command we can:
So first of all we’ll need to add a new feeler_length property to the Creature class, and a new heading() method that calculates the angle between a creature’s current position and its next position. We also add a new world property to the Creature class. That way each creature has access to all the obstacles in the world. We loop through the list of the world’s obstacles in the avoid_obstacles() method.
The avoid_obstacles() method will do the following:
class Creature: def __init__(self, world, x, y, speed=1.0, size=4): self.x = x self.y = y self.speed = speed self.size = size self.world = world self.feeler_length = 25 self._vx = 0 self._vy = 0 def heading(self): """ Returns the creature's heading as angle in degrees. """ return angle(self.x, self.y, self.x+self._vx, self.y+self._vy) def avoid_obstacles(self, m=0.4, perimeter=4): # Find out where the creature is going. a = self.heading() for obstacle in self.world.obstacles: # Calculate the distance between the creature and the obstacle. d = distance(self.x, self.y, obstacle.x, obstacle.y) # Respond faster if the creature is very close to an obstacle. if d - obstacle.radius < perimeter: m *= 10 # Check if the tip of the feeler falls inside the obstacle. # This is never true if the feeler length # is smaller than the distance to the obstable. if d - obstacle.radius <= self.feeler_length: tip_x, tip_y = coordinates(self.x, self.y, d, a) if distance(obstacle.x, obstacle.y, tip_x, tip_y) < obstacle.radius: # Nudge the creature away from the obstacle. m *= self.speed if tip_x < obstacle.x: self._vx -= random(m) if tip_y < obstacle.y: self._vy -= random(m) if tip_x > obstacle.x: self._vx += random(m) if tip_y > obstacle.y: self._vy += random(m) if d - obstacle.radius < perimeter: return def roam(self): """ Creature changes heading aimlessly. With its feeler it will scan for obstacles and steer away. """ self.avoid_obstacles() v = self.speed self._vx += random(-v, v) self._vy += random(-v, v) self._vx = max(-v, min(self._vx, v)) self._vy = max(-v, min(self._vy, v)) self.x += self._vx self.y += self._vy
Play movie | view source code
Another easy example of classes is the Tendrils item in the gallery. | https://plotdevice.io/tut/Classes | CC-MAIN-2017-39 | refinedweb | 1,788 | 67.55 |
Seeing this issue again JAX-WS RI 2.2.5 - JAXWS 2.2 wsgen does not generate the correct wsdl and schema files
Hello
I am currently using Ant tasks for wsgen and wsimport using the latest JAX-WS RI 2.2.5 libraries. I have an annotated endpoint service class in one package and have various type classes located in another package. Normally, as I have seen with previous releases 2.1.X, I would expect to see the WSDL file created along with the necessary XSD files imported where the XSDs have the respective namespaces correlating to the package names (in reverse). The "type" classes all have proper JAXB annotations on them such as:
package com.acme.book.services.dvo;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlType;
@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "Book", namespace="")
public class Book {
<code removed> }
package com.acme.book.webservices;
import java.util.ArrayList;
import java.util.List;
import javax.jws.HandlerChain;
import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;
import org.apache.log4j.Logger;
import com.acme.book.services.BookOrderManagerService;
import com.acme.book.services.dvo.Book;
import com.acme.book.services.dvo.BookOrder;
@WebService(name = "BookOrderManagerService", serviceName = "BookOrderManagerService", portName = "BookOrderManagerServicePort")
@HandlerChain(file = "handler-chain.xml")
public class BookOrderManagerWS implements BookOrderManagerService {
<code removed> }
This has worked fine before... but now using the newer JAR files (2.2.5), all I am getting is the single WSDL file (expected) but with ONE xsd file (NOT expected) that has everything dumped into ONE namespace ....
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<xs:schema
...the namespace of the service endpoint only. This appears to describe what is found here now: (supposedly fixed in 2.2.2 per the link below)
Has this been fixed for the Ant tasks as well, or has somehow the bug shown back up again? Nothing I try can seem to get around this now.
Thanks
For the record - the problem appears to occur w/ versions JAX-WS RI 2.2.1, 2.2.3 and 2.2.5 (tried all 3 versions). Then, I just installed the last available version of JAX-WS RI 2.1.x - appears to be 2.1.7 found at based on the link from here:
I re-ran the Ant build script w/ the wsgen taskdef set up to resolve from the 2.1.7 location - the problem does NOT occur IN THIS CASE. The issue appears to have been introduced in 2.2.x. Now I get the WSDL with the 3 EXPECTED XSD files instead of only ONE. I would presume that this would be considered a severe defect in the 2.2.x version of the libraries. I have not tried running as a direct "command line" so perhaps it was fixed there for folks who use wsgen that way - but clearly it appears the Wsgen Ant task DOES NOT work correctly.
submitted a defect since reverting to 2.1.7 resolved problem, indicating defect was somehow introduced in 2.2.x | https://www.java.net/forum/topic/glassfish/metro-and-jaxb/seeing-issue-again-jax-ws-ri-225-jaxws-22-wsgen-does-not-generate-correct-wsdl-and-schema-f | CC-MAIN-2015-18 | refinedweb | 517 | 53.37 |
Episode 39 · January 15, 2015
Learn how to refactor a complex controller with a bunch of methods into a much cleaner set of code
In this episode, we're going to talk about refactoring a controller that my friend gave me. I stripped out a lot of code that we're not going to cover in this episode, but they had a bunch of these methods inside the controller that sent text messages, so this interacts with the Twilio API, and talks to the user that's passed in the SMS phone number. They have basically all of these methods that do the same things, and they all initialize the Twilio REST client, they set the phone number and they set the phone number it's sending to. Then, each of them has a different body of text that they're sending. I thought it would be nice to take a look at this and see how we could refactor it.
The first thing that is easiest to do is to pull out a method called client, and the way we're going to do that is we're going to take this and the reason why we call the client is because we called the Twilio REST client, "client". We do that, we can get rid of this variable here, and return the Twilio REST client, and that means that each and every one of these methods doesn't need these three lines of code anymore, and by removing that line and naming the method the same as this local variable, we can simply refactor that out, and then not have to do anything and not change any of these methods except for deleting those lines. These lines will function exactly the same as they did before, and that's pretty cool. We could do the same thing with
from and
to, but especially in the to case, we already have the send to variables, so why don't we just update that and get rid of this local variable, because we only use it the one time in each of these methods, so we can delete these, and go through each of these methods, and shorten them just a bit, you can see that originally, the developer was testing these and we got something working, so let's just copy paste this because the first time worked and we can trust that the rest of these is going to work. That's not an unreasonable mindset to have when you're developing something, it's like a tool like you've never used before.
All of these use the exact same phone number to send from, so let's just actually hard code it here, because we're never going to need to change that, and if we do, we can use an environment variable or something in the rails application config or something. WE can change this and leave it that way, and if we ever do need to update it, we can do that accordingly. I'm going to go through and update all of this as well.
Now we've got this shortened quite a bit, and you can start to see that these methods are a lot more deliberate now, so each of them simply says like: Hey, let's create a text message from this number to this number and the body, and because of the way that this is visually, you can also see that there's really only one method call inside each of these methods, even though you see like hash variables, the keys and the values. You can tell that they're part of this method call because of how they're tapped in. That's pretty nifty and that helps us a lot. One thing we can do with these two is the
account_sid variable and the
auth_token variable are unnecessary, they don't really provide us any benefit, because they're only used once, and we might as well just put them there. If we delete that, we can save another couple lines of code, and add a little bit more clarity not adding those variables. We can also think about making a Send SMS method
def send_sms(to_phone_number, body) :from => "+15128618455", :to => to_phone_number, :body => body end
Having the phone number and declaring that it's the phone number you're sending it to makes it a whole lot clearer so you can see it at a glance. Now we can update each and every one of these methods to say
def get_first_nm(sendto) send_sms sendto, "Welcome!" end
Now that we've done that, you can see that all of these lines are becoming a lot clearer. You can glance at them and you can see we're going to send an SMS, we don't really know that that means, we want to probably update all of the sendtos to to phone number, but we do know at a glance like what message we're going to send them. Let's actually just grab all of these change sendto to to_phone_number, and then, now we can see send SMS to this phone number, and this is going to be the message we send. That helps a whole lot. We can also see this default response does a little bit of database logic, it pulls a response out of the database, and then sends an SMS to that phone number with that custom body, and we can refactor that a little bit just by cleaning this up and then we have this tabbed over little piece of logic here, and this is going to help us by saying: Ok, we can see that the body is set just one time, but actually, why don't we take this a step further. Why do we even need this body variable? Maybe we do, maybe this body variable is rendered inside of a response somewhere, but if it's not, why don't we just make a method called default_response. These clearly don't use any local variables, because we didn't set any before this line, so we can actually just take this out and paste it into this own method. If we do that, then we know what the default response is, and we can get rid of the body variable so long as we're not using it in the HTML or XML response or something like that. This is the way that I definitely prefer to do this because then we can set our text here and
send_sms to_phone_number and obviously it's going to be the default response. We don't have to care here how that works, we just have to know that as long as the default response gives us what we want, then we're good, we don't have to worry about that whatsoever. That's cool.
Now we've refactored this quite a bit, but you can tell that there's a whole lot of things that are still almost unnecessary you might say, like we've got the replies and the controller knows how to talk to Twilio and it knows how to send an SMS and all that, like it seems to make more sense if we actually make an SMS class that controls how those works, so if we think of that, we can make an SMS class and we can say: Ok, this SMS class is going to be the one that knows how to send messages, and we'll use that instead of making this little method here, but the way we did this, and refactoring into little helper methods, we can start to see the areas of responsibility. You can see that these two methods are extremely related to how to send an SMS. The rest of these are just simply sending one. They don't really care, they know what text to send and all of that, so we should actually pull these out into their own object.
I've created an SMS class here that we can pull that into. Let's just grab the client and send_sms method and paste them in here. Now we have instance methods that allow us to send that to a phone number, but why don't we also create an initialize method that accepts the two phone numbers so we can save that, and then here instead we'll send to that phone number. This allows us to update our code here with the idea of: Hey, there's an SMS thing, and we would want to send it to the phoen number, and then we can rename to send instead of send_sms. This allows us to create a new SMS, or the idea of: Hey, we're going to send an SMS to this person, and then the send method will actually send the text to them. This allows you to update these and they read a lot better, or at least I think they read better. This way, you have the concept of sending an SMS. This doesn't really send an SMS. Twilio is the one that really does it, but your application has the concept of sending the SMS, so this also allows you to do some neat things. Maybe your introduction is going to be a little bit longer than one message, so in this case, what if you had "Welcome to BringUp! Let's begin" And that was just a single message, and then what if you had another SMS that you wanted to send to the same phone number? You could simply do this, and this SMS class would allow you to send two messages to the same phone number because it saves the phone number and you could use this sort of on a longer basis. You could reuse the SMS object multiple times to send them multiple messages.
I'm going to take this back a notch and we're going to go back to calling send here on the single message, and then we now can see that all these methods are interesting, right? I mean they all have a name, and then they all have an associated string that you want to send. You can imagine that there is some piece in the controller that says:
if "first name" get_first_name else "last name" get_last_nm end
You can imagine there's a piece of code in the controller somewhere that I didn't include that calls these methods. What's neat about the way that we've refactored this down is that you can see we're always creating a new SMS, but the message is going to be different each time, but we also have the name. Somewhere we know that we won't ask for the first name, so imagine that there's an
@reply.state and that state could equal "first nm". If we had some hash of keys and values and we had "first_nm" and the message we want to send to people in that state is this, and then maybe we have "last_nm", and the message we want to send them is: "What is your last name?" and we can update this to say: "What is your first name?", and then we could start pulling all of these out, and we can have the child name, and the questions at those states in time. The way we could do that is we could either create a questions constant and set it equal to that, we could have a method that returns this or whatever. I'm going to do a constant because they should always stay the same, and then we could take these methods and we could actually get rid of them almost completely. We could have a method called send_next_sms and this will look at the reply state, and as we mentioned before, this would be like first_nm and then, here we could retrieve that question and we can just say
def send_next_sms QUESTIONS[@reply.state] end
This would return us this string, we would get that back, and then we could just simply send the SMS, so we could simply say
def send_next_sms(to_phone_number) SMS.new(to_phone_number).send QUESTIONS[@reply.state] end
That would allow us to delete all of these methods, so as long as we moved all of these properly into this, so we could also do like-- These methods don't have the same names but it doesn't matter, as long as we know that let's make one called "sign_off", and that's going to be the state that we want to send to the user. Then we can name these accordingly so long as we remember this, then we can look up the message all the time, and we can delete all of these. The one thing we can't delete is the default response. What if this
reply state doesn't exist in there? So what if the state is something wrong? Maybe there is nil or something like that, in that case, we can switch this square brackets with the fetch, and we can send the default response instead. Fetch will look for the state, and it will look up the question with the name of state, so it will try and do that, but if there's nil, and there's no nil equals whatever, then it will fall back to the default response. We can even delete the default response by doing that, and we can add states into our system at any time just by simply adding a line in here and saying
"relationship" => "What is your relationship?" You can simply refactor this down to a hard coded list of names and questions or messages, and then take all the duplication of each of those methods and have one that is smart about how it works and delegates directly to the proper message. This way, we can refactor our code down really cleanly so that there's really nothing that can go wrong with this. If you have an invalid state, the user will get the default response, and an SMS will always be sent. We've taken out all of the if statements, except for this one, and this one can't really be removed, because we have to check to see if there's a parent and it has a response, if it doesn't then we can't use it. In this case, we're always going to need a little bit of logic. One thing we could do is we could combine this into something but it's not much to worry about. What we've done is we've removed these separate methods and all the logic around that, and simply replaced it with one that delegates to this hash of questions.
I hope that's helpful for you, it's basically simply taking these lines of code, splitting them up into methods inside the same file, and then rethinking about what concepts or what groupings do these methods have? Definitely sending an SMS and connecting with Twilio belongs and something, but the rest of these actually belong together. There's something that these questions and sending the next question all belong to. We can even do
def next_question QUESTIONS.fetch(@reply.state, default_response) end
Then we can always send the next question. These probably belong is some sort of model, maybe these are questions for your user, and so this way you could always look up the right next question for the user and so on. I actually prefer this extra method here so that we can write tests to make sure that hey, this user of the replies in the current state, then we can look up that, and then we can write our tests to make sure that: If you're on the first name we should get this message and so on. Then, we could write a test just simply to make sure that SMS gets created accordingly and so on. For testing purposes, we've broken things down into very very very short things, and removed any of the duplication, as well as any of the if statements in there, because now, the hash defines all of the logic of finding which question goes to what. We don't need any if statements, and that makes this code very very clean, simple and testable | https://gorails.com/episodes/refactoring-controller-methods?autoplay=1 | CC-MAIN-2017-47 | refinedweb | 2,759 | 73.95 |
DNS's distributed database is indexed by domain names. Each domain name is essentially just a path in a large inverted tree, called the domain namespace. The tree's hierarchical structure, shown in Figure 2-1, is similar to the structure of the Windows filesystem. The tree has a single root at the top.[1] In the Windows filesystem, this is called the root directory and is represented by a backslash (\ ). DNS simply calls it "the root." Like a filesystem, DNS's tree can branch any number of ways at each intersection point, or node. The depth of the tree is limited to 127 levels (a limit you're not likely to reach).
[1] Clearly this is a computer scientist's tree, not a botanist's. name need to be unique only among the children, not among all the nodes in the tree. The same restriction applies to the Windows filesystem: you can't give two sibling directories or two files in the same directory the same name. As illustrated in Figure 2-2, just as you can't have two hobbes.pa.ca.us nodes in the namespace, you can't have two \Temp directories. You can, however, have both a hobbes.pa.ca.us node and a hobbes.lg.ca.us node, as you can have both a \Temp directory and a \Windows\Temp directory.
A domain is simply a subtree of the domain namespace. The domain name of a domain is the same as the domain name of the node at the very top of the domain. So, for example, the top of the purdue.edu domain is a node named purdue.edu, as shown in Figure 2-3.
Likewise, in a filesystem, at the top of the \Program Files directory you'd expect to find a node called \Program Files, as shown in Figure 2-4., as shown in Figure 2-5.
So in the abstract, a domain is just a subtree of the domain namespace. 10 different hosts, each of them on a different network and perhaps even in a different country, all in the same domain.
One note of caution: don't confuse domains in DNS with domains in namespace to find data in other NIS domains. NT domains, which provide account-management and security services, also don't have any relationship to DNS domains. Active Directory domains, however, are closely related to DNS. We discuss that relationship in Chapter 8.
Domain names at the leaves of the tree generally represent individual hosts, and they may point to network addresses, hardware information, and mail-routing information. Domain names in the interior of the tree can name a host and point to information about the domain; they aren't restricted to one or the other. Interior domain names can represent both the domain they correspond to and a particular host on the network. For example, hp.com is both the name of the Hewlett-Packard Company's domain and a domain name that refers to the hosts that run).
A domain may have several subtrees of its own, called subdomains.[2]
[2] The terms "domain" and "subdomain" are often used interchangeably, or nearly so, in DNS documentation. Here, we use "subdomain" only as a relative term: a domain is a subdomain of another domain if the root of the subdomain is within the domain.
A simple way of determining if. It's also namespace:
A top-level domain is a child of the root.
A first-level domain is a child of the root (a top-level domain).
A second-level domain is a child of a first-level domain, and so on.
The data associated with domain names.) In this book, we concentrate on the internet class.
Within a class, records come in several types, which correspond to the different varieties of data that may be stored in the domain namespace. Different classes may define different record types, though some types are common to more than one class. For example, almost every class defines an address type. Each record type in a given class defines a particular record syntax to which all resource records of that class and type must adhere. (For details on resource record types and their syntaxes, see Appendix A.)
If this information seems sketchy, don't worry?we'll cover the records in the internet class in more detail later. The common records are described in Chapter 4, and a more comprehensive list is included as part of Appendix A. | http://etutorials.org/Server+Administration/dns+windows+server/Chapter+2.+How+Does+DNS+Work/2.1+The+Domain+Namespace/ | CC-MAIN-2018-30 | refinedweb | 750 | 65.93 |
How do I open a Form using a Combobox In C++?
How do I open a Form using a combobox? If I have 3 Values such as apples, oranges and I want to open a form if the user choses apples and another form if the user choses oranges.
private: System::Void button1_Click(System::Object^ sender, System::EventArgs^ e) { FrenchFries ^ newfrm = gcnew FrenchFries(); if (comboBox1::Text = "French Fries") newfrm->Show(); else Form::Close();
This is the error that I get.
error C2451: conditional expression of type 'void' is illegal
error C2653: 'comboBox1' : is not a class or namespace name
8 hours ago
- 4 days left to answer. | https://www.daniweb.com/programming/software-development/threads/429645/how-do-i-open-a-form-using-a-combobox | CC-MAIN-2017-34 | refinedweb | 107 | 68.1 |
View Debugging in Xcode 6
When you’re developing an app, sometimes there may be a bug with your views or auto layout constraints that isn’t easy to find just by looking through your code.
It pays to know the technique of view debugging – and this has never been easier with the advent of Xcode 6.
Instead of printing frames to the console and trying to visualize layouts in your head, you’re now able to inspect an entire view hierarchy visually – right from within Xcode. What an incredible time to be alive!
This tutorial will take you through all the different options that are at your disposal. So, are you ready to write some code? That’s too bad, because you won’t. :]
Instead, you’ll inspect the view hierarchy of an open source library to better understand how it was written — without even looking at any code.
Getting Started
The library you’ll use in this tutorial is JSQMessagesViewController, written by Jesse Squires. The UI for this library should look very familiar, as Jesse built it to look similar to the Messages app.
To get started, head over to the GitHub project page, download the source code, and unzip into a directory.
Note: The library uses CocoaPods to manage its dependencies on other libraries. If you’re unfamiliar with how CocoaPods works, please take a look at this CocoaPods tutorial here on the site before proceeding.
Next, navigate to the unzipped project directory in Terminal and run
pod install to install the required dependencies. Then, open JSQMessages.xcworkspace and build and run the app on the iPhone 5s simulator. (You can use any simulator, but dimensions in this tutorial are based on a 4-inch display, so choosing the same will make it easier to follow.)
Tap on Push via storyboard and you’ll notice you’re now in a text messaging thread with Steve Jobs and Tim Cook. (This may cause a flutter in your heart and make you question your perception of reality, but it’s not really them.) This is the view you’ll inspect.
Go back to Xcode and click on the Debug View Hierarchy button in the Debug bar. Alternatively, go to Debug\View Debugging\Capture View Hierarchy.
Xcode is now interrupting your app and handing the reigns to the debugger, just as if you had paused your app with the pause button on the Debug bar.
In addition, Xcode replaces the code editor with a canvas in which it draws the entire view hierarchy of the key window of your app, including thin lines (called wireframes) that indicate the boundaries of every view.
You may know that when you add a subview to a view hierarchy, you’re adding a layer on top of the current stack of views. Because most views don’t overlap, it looks like all views are just part of one big layer when you run your app. The screen you’re currently looking at is pretty close to that, but with a bunch of extra lines.
So how is that useful? Well, right now you’re seeing a visual of the view stack from overhead, but what if you could visualize where the layers fall within the stack? Click and drag in the canvas, and you’ll see that instead of a flat 2D view, you’re actually interacting with a 3D model of your view hierarchy.
You can view the hierarchy from the side, top, bottom, a corner and even from the back!
Note: It’s possible your canvas isn’t showing the same views as this tutorial assumes. To make sure you’re on the same page, press cmd + 6 to get to the Debug navigator.
At the bottom of the pane, you’ll see two buttons on the left. Deselect both of these buttons, as seen in this image. If not, some views will be hidden on the canvas.
Exploring the View Hierarchy
The most natural and common perspective of this 3D model is to look at it from the left side — you’ll see why a little later in this tutorial — so manipulate it to get a point of view like the one below.
This gives your view hierarchy some perspective that’s very useful if you want to visualize how it’s built. However, there seems to be many empty views at the “bottom” (on the left) of the stack. What’s that all about?
Click on the left-most view (i.e. the one in the very back) and you’ll see that Xcode highlights it to indicate your selection. You’ll also see the Jump Bar (just above the Canvas) update to show a
UIWindow as the last item — that last item will always reflect the currently selected view and its class type.
Since this app uses only one window, it’s safe to assume the
UIWindow at the start of the Jump Bar is the app’s key window, or in other words, the
window property on
AppDelegate.
OK, it’s good to know how to find the window, but it’s unlikely that you’ll need to inspect that. What about the next view? In the canvas, click on the view to the right (i.e. on top) of the window and look at the Jump Bar again.
UILayoutContainerView. That’s not even a public class!
From there, the view hierarchy looks like this:
UINavigationTransitionView: The container view in which navigation controller transitions happen
UIViewControllerWrapperView: A wrapper view that contains the view controller’s
viewproperty
UIView: The top-level view of a view controller (the same as a view controller’s
viewproperty)
JSQMessagesCollectionView: The collection view used by this project to display all messages
Focusing on Views of Interest
In debugging this particular view hierarchy, the first four views (starting with the window) are really just visual noise. They don’t bring anything meaningful; they distract in trying to understand what else is going on in this view hierarchy. It would sure be nice if you could filter out that visual noise…
And you can. :] Look at the double-thumbed slider in the bottom right of the canvas. By default, the thumbs are all the way on the left and right side of the slider.
Drag the left thumb slowly to the right a little bit; you’ll see the wireframe that represents the app’s window disappear from the canvas. If you drag a little further, the
UINavigationTransitionView disappears as well.
Drag the left thumb as far as needed to hide all the parent views of
JSQMessagesCollectionView. Your canvas should now look similar to this:
On the right side, you might notice the navigation bar is not just distracting, but it’s actually laid out on top of the collection view, making it hard to see what’s going on underneath. Fortunately, you can hide it.
Because you’re focusing on a smaller area of the screen with many smaller views that comprise the navigation bar, it’s a good idea to zoom in on the nav bar so you can see what exactly you’re doing.
Use the zoom control buttons, which are in a group of three buttons centered at the bottom of the canvas:
As you would expect, the + button zooms in, the – zooms out, and the = button resets the zoom to the normal zoom level. Zoom in to get a good visual of the navigation bar.
Note: If you use a trackpad, pinch gestures will also zoom. A trackpad is also useful to move around in the canvas if all parts of the screen can’t be shown at once because you zoom in really far.
You can also zoom with option-mousewheel.
The extra detail you get by zooming in on the toolbar is nice, but the views still slightly overlap, so it’s not easy to tell which view is which.
To solve that problem, use the spacing slider in the bottom left corner of the canvas. The further you drag the thumb of the spacing slider to the right, the more spacing Xcode shows between different views.
In this particular case, move the slider to the right as much as needed to avoid overlapping views in the toolbar. You might have to play around with the perspective by click-dragging on the canvas to get the desired result.
The 3D model is now perfectly manipulated so you can easily hide the navigation bar.
In the slider on the right (the one that hides views), drag the right thumb slowly to the left, up to the
UINavigationBar. Remember that you can use the Jump Bar to identify each view’s class by selecting the topmost layer as you go. You’ll see the navigation items disappear first, then the buttons that contain them, followed by a few private views, and lastly, the navigation bar.
Note: If you rotate the canvas to look at the 3D view hierarchy model with the top layer on the left, the slider’s left thumb still removes views from the bottom of the stack, which is now on the right. Similarly, the right thumb removes views from the left.
Moving a slider from the left to the right and having views disappear from the right to the left (and vice versa) is counterintuitive, so that’s why looking at the model with the top layer on the right is the most natural perspective.
Unfortunately, hiding the nav bar (with the root view of
_UIBackdropView) view also causes the toolbar items’ content at the bottom of the screen to disappear. To see this, you may need to adjust the zoom level or move down the canvas.
You want to see the toolbar items as they are an important part of the screen, so only hide the views up until (but not including) the
_UIBackdropView. The navbar stack should look something like the following once you’re done.
More View Options
Now that irrelevant views are hidden, it’s time to take a look at the screen from the front again. You can try to drag the model back into place, but sometimes it’s difficult to get it just right. Fortunately, there is a simpler way.
Look at the group of four buttons to the left of the zoom buttons. The third button from the left is the ResetViewing Area button. It undoes rotations and gives you the front perspective of the view hierarchy, just like in the simulator or on a device.
Your canvas should look similar to the following at this point:
You probably noticed that what you see in the debugger is still not exactly what you see when the app actually runs.
First of all, you still have wireframes around every single view; they allow you to see transparent views or views without any content, but if you don’t need the detail they can make things pretty noisy.
You can turn this off with the View Mode button — the one to the right of the Reset Viewing Area button. When you click the view mode button, you can specify if you want to see the views’ wireframes, their contents or both.
A wireframes-only view is particularly useful if you’re mostly interested in positioning, and not so much about what a view looks like. Showing only the views’ contents is useful when you’re trying to debug a view’s appearance.
To reduce some of the clutter the wireframes cause in this case (especially near the navigation bar and toolbar), change the view mode to Contents to remove all the wireframes and leaves you with the core of the app.
Next, a couple of things are missing from the current view. When you run the app, you’ll see labels above the text bubbles that indicate the sender’s name or the message’s timestamp, as well as an image of the Golden Gate Bridge in the last bubble. But the debugger isn’t showing those labels and that image!
To solve this, look at the first button in the center row of buttons on the canvas. This is a toggle to show or hide clipped views. These are views that have their
clipsToBounds property set to
YES.
That is exactly what is happening with these labels, presumably because long names and dates should not extend beyond the bounds of their labels. The same applies to the image, which uses a corner radius and clipping to produce the rounded corners. Click on the toggle and you’ll notice these views now show up in Xcode.
Note: You may notice wireframes around the newly visible items. If that’s the case, toggle wireframes on and then off with the View Mode button you used previously. That should resolve the issue.
And there you have it: a near-perfect replica of your view hierarchy right within Xcode.
Inspecting Views
Now that the important parts are easily accessible, it’s time to look at layouts for these different views.
You already knew the collection view is what makes all these views come together, but wouldn’t it be great if you could just have an overview of the different elements that are at play here? Good news – you can!
Press cmd + 6 to go to the Debug navigator. Since this is a debugging session just like any other, the Debug navigator provides contextual information about the current session. For view debugging, this means Xcode provides a view tree of all the views for all windows. Expand the tree in the Debug navigator to look like this:
Note: At the bottom of the Debug navigator, you’ll see options that give a little bit of control over what kinds of items display in this tree. Apple’s documentation claims the button on the left filters private elements out of system view implementations, though this button does not seem to work as of Xcode 6.2.
The button to the right hides views that have their
hidden properties set to
YES, and the search bar only displays views and constraints that match the search criteria.
For the purposes of this tutorial, you should have both buttons deselected and no search filter applied.
This is a good point to start exploring a little bit. Expand the last
JSQMessagesCollectionViewCellOutgoing. It only has one subview: a
UIView.
If you’ve worked with collection views before, you know this makes sense because any
UICollectionViewCell has a
contentView property that contains the cell’s contents.
Click — don’t expand — on that
UIView in the Debug navigator and you’ll see that Xcode has now highlighted it in the canvas so you know exactly where it is on the screen.
To really understand how iOS positions that cell, open the Size Inspector with cmd + option + 4. The top part of the visualizes the view’s bounds, position and anchor point.
However, the really interesting part is the list of Auto Layout constraints that apply to this view. You can immediately tell that the cell’s content view is 312 points wide, 170 points tall and centered in its parent. The containing cell is also 312 by 170 points, so you know the content view takes up the entire cell.
Below are constraints colored in gray that indicate those are constraints that dictate relationships between this view and its subviews.
To get more details about a particular constraint, first expand that view in the view tree and then the Constraints item. You’ll also see the same constraints you saw in the Size navigator listed here.
Click on the first constraint (for me it’s a constraint on
self.midX) and switch to the Object inspector by pressing cmd + option + 3. You’ll see a constraint overview with the items, multiplier, constant and priority. This is much like the summary you see in Interface Builder when you edit a constraint.
In addition to sizing and constraint information, you can see other details related to the display of a particular view in the object inspector. Back in the Debug navigator, expand the
UIView in the tree and you’ll see there are three
JSQMessageLabels and two
UIViews under it. Select the first
JSQMessageLabel (the one with the timestamp), and open the Object inspector.
The first section indicates the object’s class name and memory address, and the second shows values for various public properties of the object that pertain to its appearance.
Here you can see the label’s text color is 0.67 gray with no alpha, and its font size is 12pt.
Other classes have useful information specific to how they’re visualized as well. Back in the Debug navigator, expand the second
UIView under the cell’s root
UIView and you’ll see a
UIImageView.
Select the image view from the tree and check out the Object inspector.
Here you’re looking at the view that shows the user’s avatar — in this case, the author’s initials, JSQ. You can see the normal image, conveniently labeled Image, as well as the darker image, labeled Highlighted, that shows when the user taps on the cell.
The other two instances of
JSQMessageLabel in this cell’s root view don’t currently have text, but they are used for the sender’s name in incoming messages and an error message when sending an outgoing message fails.
And that’s how easy it is to do view debugging in Xcode! To continue running the app, just click the Continue button on the Debug bar, or go to Debug\Continue, just like you would if you were in a normal debug session.
Live Modifications
Now that you know the basics of view debugging in Xcode 6, it’s time to put that knowledge to work with a small exercise: For the collection view you’ve been exploring throughout the tutorial, make its vertical scroll indicator red by only using the debugger.
Here are two tips to get you started:
- Since view debugging is just like any other debugging session, you can use
exprand other commands in the console, but keep in mind changes won’t appear until you resume execution. For more information on these commands, take a look at this debugging apps in Xcode tutorial.
- Because pointers in Objective-C are really just memory addresses, you’re simply messaging a memory address when you message an object. This works in the debugger as well, so a command like
po 0x123456abcdefwould print the description of the object at that memory address.
Give it some solid attempts before you break down and look at the solution below!
Congratulations, you now know the basics of view debugging with Xcode 6!
Old School Debugging
Live view debugging made debugging views in Xcode 6 a lot easier, but that doesn’t mean that your favorite old school tricks are now useless. In fact, iOS 8 introduces a very welcome addition to the family of view debugging tricks:
_printHierarchy.
Printing the View Controller Hierarchy
_printHierarchy is a private method on
UIViewController that you can use to print the view controller hierarchy to the console. Build and Run, select Push via storyboard and then hit the pause button in the Debug bar.
Now type this in the console and press return:
po [[[[UIApplication sharedApplication] keyWindow] rootViewController] _printHierarchy]
You’ll get something very similar to this:
<UINavigationController 0x7fdf539216c0>, state: appeared, view: <UILayoutContainerView 0x7fdf51e33bc0> | <TableViewController 0x7fdf53921f10>, state: disappeared, view: <UITableView 0x7fdf5283fc00> not in the window | <DemoMessagesViewController 0x7fdf51d7d520>, state: appeared, view: <UIView 0x7fdf53c0b990>
This tells you that there is a
UINavigationController whose first view controller is a
TableViewController — the one where you chose how to push the controller. The second view controller is the
DemoMessagesViewController, or the view controller you have been debugging.
It doesn’t seem too exciting in this particular example, but if you have several child view controllers within a navigation controller, and a tab bar controller in a popover in a modal view controller, (I’m not proud of some of my apps’ UI…) it can be immensely useful for figuring out exactly how the view controller hierarchy works.
Printing the View Hierarchy
If you’re not a very visual person and prefer a textual overview of a view hierarchy, you can always use the age-old, and also private,
recursiveDescription on
UIView. This prints a view hierarchy very similar to the view controller hierarchy as demonstrated above.
Open Views\JSQMessagesCollectionViewCellOutgoing.m and add a breakpoint in
awakeFromNib.
Build and Run, then select Push via Storyboard. The debugger should break as a
JSQMessagesCollectionViewCellOutgoing is loaded. Now type the following into the console:
po [self.contentView recursiveDescription]
This will print the hierarchy of the
JSQMessagesCollectionViewCellOutgoing’s
contentView, which will look something like this:
<UIView: 0x7fde6c475de0; frame = (0 0; 312 170); gestureRecognizers = <NSArray: 0x7fde6c484fe0>; layer = <CALayer: 0x7fde6c474750>> | <JSQMessagesLabel: 0x7fde6c475eb0; baseClass = UILabel; frame = (0 0; 312 20); text = 'Today 10:58 PM'; clipsToBounds = YES; opaque = NO; autoresize = RM+BM; userInteractionEnabled = NO; layer = <_UILabelLayer: 0x7fde6c476030>> | <JSQMessagesLabel: 0x7fde6c476400; baseClass = UILabel; frame = (0 20; 312 0); clipsToBounds = YES; opaque = NO; autoresize = RM+BM; userInteractionEnabled = NO; layer = <_UILabelLayer: 0x7fde6c476580>> | <UIView: 0x7fde6c476b50; frame = (70 20; 210 150); autoresize = RM+BM; layer = <CALayer: 0x7fde6c474dd0>> | | <UIImageView: 0x7fde6c482880; frame = (0 0; 210 150); opaque = NO; userInteractionEnabled = NO; layer = <CALayer: 0x7fde6c476ae0>> - (null) | <UIView: 0x7fde6c482da0; frame = (282 140; 30 30); autoresize = RM+BM; layer = <CALayer: 0x7fde6c482d00>> | | <UIImageView: 0x7fde6c482e70; frame = (0 0; 30 30); opaque = NO; autoresize = RM+BM; userInteractionEnabled = NO; layer = <CALayer: 0x7fde6c482f70>> - (null) | <JSQMessagesLabel: 0x7fde6c483390; baseClass = UILabel; frame = (0 170; 312 0); clipsToBounds = YES; opaque = NO; autoresize = RM+BM; userInteractionEnabled = NO; layer = <_UILabelLayer: 0x7fde6c483510>>
It’s rudimentary, but can be helpful when you want to debug a view hierarchy pre iOS 8.
Using debugQuickLookObject
Lastly, Xcode 5.1 introduced a feature called Debug Quick Look. This feature is most useful when you’re already debugging and aren’t wondering what an object looks like at a certain point in your code.
Your custom class can implement the method
debugQuickLookObject and return anything that is visually presentable by Xcode. Then, when you’re debugging and you have an object you want to inspect, you can use quick look and Xcode will show you a visual representation of that object.
For example,
NSURL’s implementation of
debugQuickLookObject returns a
UIWebView with that URL so you can actually see what’s behind the URL.
For more information on debugging using Quick Look, take a look at the documentation.
Where To Go From Here?
And that’s it for Live View Debugging. It’s an easy tool and can save hours of manually sifting through a view hierarchy trying to understand how and where it’s drawing views.
If you’re looking for a more advanced and comprehensive tool than just Xcode, then take a look at Reveal. While it’s a paid app, it’s also more powerful than Xcode’s view debugging. You can view our Tech Talk on the subject over here.
We hope you enjoyed this tutorial and feel a little more comfortable debugging your UI. If you have comments or questions, please use the forum discussion below! | https://www.raywenderlich.com/98356/view-debugging-in-xcode-6 | CC-MAIN-2018-13 | refinedweb | 3,857 | 59.43 |
GETPWENT(3V) GETPWENT(3V)
NAME
getpwent, getpwuid, getpwnam, setpwent, endpwent, setpwfile, fgetpwent
- get password file entry
SYNOPSIS
#include <<pwd.h>>
struct passwd *getpwent()
struct passwd *getpwuid(uid)
uid_t uid;
struct passwd *getpwnam(name)
char *name;
void setpwent()
void endpwent()
int setpwfile(name)
char *name;
struct passwd *fgetpwent(f)
FILE *f;();
The fields pw_quota and pw_comment are unused; the others have meanings
described in passwd(5). When first called, getpwent() returns a
pointer to the first passwd structure in the file; thereafter, it
returns a pointer to the next passwd structure in the file; so succes-
sive calls can be used to search the entire file. getpwuid() searches
from the beginning of the file until a numerical user ID matching uid
is found and returns a pointer to the particular structure in which it
was found. getpwnam() searches from the beginning of the file until a
login name matching name is found, and returns a pointer to the partic-
ular structure in which it was found. If an end-of-file or an error is
encountered on reading, these functions return a NULL pointer.
A call to setpwent() has the effect of rewinding the password file to
allow repeated searches. endpwent() may be called to close the pass-
word file when processing is complete.
setpwfile() changes the default password file to name thus allowing
alternate password files to be used. Note: it does not close the pre-
vious file. If this is desired, endpwent() should be called prior to
it. setpwfile() will fail if it is called before a call to one of get-
pwent(), getpwuid(), setpwent(), or getpwnam() , or if it is called
before a call to one of these functions and after a call to endpwent().
fgetpwent() returns a pointer to the next passwd structure in the
stream f, which matches the format of the password file /etc/passwd.
SYSTEM V DESCRIPTION
struct passwd is declared in pwd.h as:
struct passwd {
char *pw_name;
char *pw_passwd;
uid_t pw_uid;
gid_t pw_gid;
char *pw_age;
char *pw_comment;
char *pw_gecos;
char *pw_dir;
char *pw_shell;
};
The field pw_age is used to hold a value for "password aging" on some
systems; "password aging" is not supported on Sun systems.
RETURN VALUES
getpwent(), getpwuid(), and getpwnam() return a pointer to struct
passwd on success. On EOF or error, or if the requested entry is not
found, they return NULL.
setpwfile() returns:
1 on success.
0 on failure.
FILES
/etc/passwd
/var/yp/domainname/passwd.byname
/var/yp/domainname/passwd.byuid
SEE ALSO
getgrent(3V), issecure(3), getlogin(3V), passwd(5), ypserv(8)
NOTES
The above routines use the standard I/O library, which increases the
size of programs not otherwise using standard I/O more than might be
expected.
setpwfile() and fgetpwent() are obsolete and should not be used,
because when the system is running in secure mode (see issecure(3)),
the password file only contains part of the information needed for a
user database entry.
BUGS
All information is contained in a static area which is overwritten by
subsequent calls to these functions, so it must be copied if it is to
be saved.
21 January 1990 GETPWENT(3V) | http://modman.unixdev.net/?sektion=3&page=getpwuid&manpath=SunOS-4.1.3 | CC-MAIN-2017-17 | refinedweb | 524 | 50.57 |
lex Ott <alexott@...> seems to think that:
>Hello Eric
>
> EML> Hi Alex, Thanks for the sample. I tracked it down to a parser issue.
> EML> The new stuff Raf put together depends on :template-specifier being
> EML> saved in the :type of the generated tags. This wasn't working for
> EML> any declaration that is specified in a namespace.
>
> EML> I was able to fix this, and a couple other cases. There are
> EML> additional template specifiers that probably cannot be made to work
> EML> in the existing parser without some serious reconsideration on how
> EML> things are assembled. Hopefully as we get the basic cases working,
> EML> that will be sufficient for most uses.
[ ... ]
>
>I tried my sample once again and found, that it still works not properly.
>
>I expect that if i write something like t-> and call
>semantic-ia-complete-symbol, then it should use information from Test
>class, but it displays only list of the members of shared_ptr class. But
>if I provide a one or two characters from function name, then it able to
>complete it. Also, i want to mention, that if i write t->o and press
>complete-symbol, then it also use information from shared_ptr class (show
>list of operators defined in it), not from Test class (function otherfunc)
Hi,
I downloaded shared_ptr.hpp, and shared_ptr_nmt.hpp from the boost
SVN repository. At first I had to setup my include path. Once that
was worked out, I was able to complete from your example as expected
once again.
Do you use the ctags parser as an out-of-buffer parser? If so, I
wonder if it is not providing the info needed. If so, disable ctags
parsing in your .emacs, then delete any semanticdb.cache files related
to boost and try again.
Eric
--
Eric Ludlam: eric@...
Siege: Emacs:
>>> Alex Ott <alexott@...> seems to think that:
>Hello all
>
>I just finished small article about Cedet (currently only in Russian), and
>have several comments about Cedet's functionality.
[ ... ]
>- senator-yank-tag could be non-intuitive, as i copy/kill the full tag
> (function, for example), but it inserts only function's declaration,
> without body. Another potential problem could be, that senator-copy-tag
> doesn't insert tag when i press C-y -- may be it's better to unify it
> behaviour with senator-kill-tag?
I checked in a few minor changes related to this today. Both
copy/kill both do similar things w/ the regular kill ring. I also
added messages to help show what you can do. I also promoted a few
commands into the menu that weren't there before.
Enjoy | https://sourceforge.net/p/cedet/mailman/cedet-devel/?viewmonth=200902&viewday=7 | CC-MAIN-2016-36 | refinedweb | 440 | 65.93 |
#include <iostream> #include <string> using namespace std; //Robin Brust //CS 161/Homework 5 void welcome() { cout<<"Welcome to Robin's guessing game."<<endl; cout<<"The object of the game is to be the user with the"<<endl; cout<<"least points. Player 1 will enter a word that"<<endl; cout<<"player 2 must guess. For every incorrect guess 1 point will"<<endl; cout<<"be added to player ones score. For every correct guess a point"<<endl; cout<<"will be added to player twos score. If you think you know the"<<endl; cout<<"solution you may enter it at anytime. Good luck"<<endl<<endl; cin.get(); } string player_one() { char player_one[20]; cout<<"Player 1 please enter your name"<<endl; cin>>player_one; cin.get(); return (player_one); } void game_play() { char word[25];//Creates memory for the word that is to be guessed char blank[25];//creates blank spaces char guess; cout<<player_one()<<" please enter a word or phrase with 25 or less characters"<<endl; int counter=0; //simple counter int correct=0;//1=correct answer, 0=wrong answer cin.getline (word,25); strlen(word); cout<<"The word is "<<strlen(word)<<" characters long!"<<endl; int length=strlen(word); //stores the length of the word or phrase for (counter = 0; counter < length; counter++)//converts the letters to uppercase { word[counter] = toupper(word[counter]); } strcpy(blank,word);//Copies the word to the 'blank' array for (counter = 0; counter < length; counter++) { if (isalnum(word[counter])) blank[counter] = '_'; //converts characters to _'s to represent blanks else blank[counter] = word[counter]; } char player_two[20]; cout<<"Player 2 please enter your name"<<endl; cin>>player_two; cin.get(); //play game until the 'blank' puzzle becomes the 'right' answer do { cout<<"The current blank puzzle is: "<<blank<<"."<<endl; cout<<player_two<<" enter a guess."<<endl; cin>>guess; guess = toupper(guess); for (counter = 0; counter <= length; counter++) //check guess { if (guess == word[counter]) { blank[counter] = guess; //fill in the puzzle with the letter } } } while (strcmp (word,blank) != 0); cout<<"Winner!"<<endl; cin.get(); } bool play_again() { char again; cout<<"Would you like to play again? y/n "<<endl; cin>>again; cin.get(); return (again == 'y' || again == 'Y'); } int main() { welcome(); do{ game_play(); }while (play_again()); return 0; }
Guessing game and score keeping
Page 1 of 1
Need suggestions on how to keep score in a guessing game
1 Replies - 8707 Views - Last Post: 01 December 2009 - 11:28 AM
#1
Guessing game and score keeping
Posted 30 November 2009 - 07:07 PM
Okay, so I am looking for a way to keep score with this guessing game and I really have no idea where to start. Any suggestions would be appreciated.
Replies To: Guessing game and score keeping
#2
Re: Guessing game and score keeping
Posted 01 December 2009 - 11:28 AM
Since you want:
For every incorrect guess 1 point will
be added to player ones score. For every correct guess a point
will be added to player twos score
You can just declare two variables for keeping score like:
Don't forget to reset each score to zero when they play again.
For every incorrect guess 1 point will
be added to player ones score. For every correct guess a point
will be added to player twos score
You can just declare two variables for keeping score like:
int playerOneScore = 0, playerTwoScore = 0; //then when a correct guess is made (you should know where that occurs in your code) increment player 2's score: playerTwoScore++; //when an incorrect guess is made, increment player 1's score: playerOneScore++;
Don't forget to reset each score to zero when they play again.
Page 1 of 1 | http://www.dreamincode.net/forums/topic/142490-guessing-game-and-score-keeping/ | CC-MAIN-2017-09 | refinedweb | 601 | 65.56 |
Google.
Most Linux developers contribute to the kernel as part of their employment. Here's a look at the top corporate contributors:
"None" refers to developers working on their own without corporate sponsorship, while "unknown" covers developers for whom a corporate affiliation could not be determined. For the purpose of assigning numerical rankings to individual corporations in this article, we're ignoring those two categories as well as the "consultants" category.
Linaro, which makes open source software for ARM systems-on-a-chip, made a huge jump from 0.7 percent of total changes in the 2012 list to 4.1 percent on the latest one. The Linux Foundation noted that Linaro, Samsung, and Texas Instruments, combined, have risen from 4.4 percent of changes to 11 percent, reflecting the growing importance of mobile and embedded systems.
17 million lines of code
The Who Writes Linux reports aren't released precisely on an annual schedule, so the percentage of contributions rather than the total number is the best way to track changes in corporate involvement from one report to the next.
The previous report released in April 2012, covering the period since December 2010, put Google in 11th place with 1.5 percent of contributions. Samsung did not make the top 30 list in that report.
The March 2012 report also compiles all stats since 2005, with Google registering at one percent in that seven-year period and Samsung at 0.6 percent. As noted in the chart above, Samsung and Google now clock in at 2.6 percent and 2.4 percent, respectively.
Google and Samsung have clearly stepped up their contributions to Linux at the same time that Android, which is based on Linux, has increased in popularity and importance. But as it turns out, Android wasn't the main driver pushing the companies into the top 10.
"Google contributions were not Android-related, given there really wasn't much Android code in the first place (7,000 lines of code, much smaller than your serial port driver), and it was all merged a while ago," Greg Kroah-Hartman, who co-authored the report and is the maintainer of the Linux kernel's stable branch, told Ars. "Google's been doing work all over the kernel (networking, security, scheduler, cgroups), all good stuff."
Samsung has likewise made contributions all over the place with a "new filesystem, core kernel work, new driver subsystems, fixes everywhere," Kroah-Hartman said.
The report is being released in conjunction with the start of the LinuxCon conference today in New Orleans. The overall pace of development has increased, with the community merging patches at the rate of 7.14 per hour, up from 6.71 per hour in the last report. The kernel now has almost 17 million lines of code, up nearly two million lines. The report did not rank contributors by lines of code.
Important features merged into the mainline kernel code since early 2012 include "full tickless operation [to lower power consumption], user namespaces, KVM and Xen virtualization for ARM, per-entity load tracking in the scheduler, user-space checkpoint/restart, 64-bit ARM architecture support, the F2FS flash-oriented filesystem, many networking improvements aimed at the latency and bufferbloat problems, two independent subsystems providing fast caching for block storage devices, and much more," the report said.
Linux Foundation Executive Director Jim Zemlin noted that while Apple touted 64-bit support in the new iPhone, "that's done in Linux, has been done for a long time. The Android ecosystem just picks that up by default, they don't have to go through any special development process to do that."
More good news is that the "longstanding squabble over Android-specific kernel features has faded completely into the background," the report said. "The much discussed 'wakelocks' [power management] feature has been quietly replaced by a different mainline solution which is used in the latest Android devices."
Microsoft isn't dumping Windows for Linux, it turns out
Among other notable changes, Microsoft fell off the list after briefly being one of Linux's most prolific contributors. Microsoft's 688 changes accounted for one percent of the total in the 2012 list.
Microsoft, in 2009, submitted drivers for its Hyper-V virtualization platform to the Linux kernel. The project hit a bunch of delays, with Microsoft struggling to get its drivers out of the staging area and into the mainline kernel. In 2011, Microsoft became the fifth largest corporate contributor to Linux kernel version 3.0 as it attempted to clean up its code. Redmond's contributions have predictably dropped off now that the code is more stable.
Nvidia (which famously pissed off Linus Torvalds with allegedly low-quality work) is in 13th place with 1.3 percent of changes on the most recent list. Nvidia did not make the 2012 list.
Canonical contributed 548 changes in the period studied in today's report and would have cracked the top 30 if the foundation hadn't included "none" and "unknown" in the rankings. The maker of Ubuntu has been criticized for years for not contributing more to the kernel (and recently clashed with Intel over patches submitted to the company's graphics driver), but Zemlin isn't upset. "I am not at all mad at them. I think they do good work," he said. "You just kind of choose what's important to your org and where you can add the most value to the project and how it intersects with your business. They choose to focus a lot of their time and energy at higher levels of the software stack."
Among individuals submitting code to Linux, longtime contributor Al Viro led the way with 4,124 changes, 1.2 percent of the total. Linus Torvalds, who created Linux and makes the final decisions on what goes into the kernel, did not make the top 30 list. But that's no surprise. Torvalds and many of the other senior kernel developers "spend much of their time getting other developers’ patches into the kernel; this work includes reviewing changes and routing accepted patches toward the mainline," the report said.
Besides tracking the submitted changes, the Linux Foundation report measures "signoffs," the review process conducted by kernel leads. Here's another chart showing the number of signoffs by company affiliation:
These are just approximate numbers since "the signoff metric is a loose indication of review," the report said.
For individuals, getting your name in the Who Writes Linux report is a path to prosperity, Zemlin said. "The competitiveness to hire all these people is palpable," he said. "That's one of the top calls I get: 'How do I hire these guys? How can I hire more kernel guys?' If you're even just getting a few patches into the Linux kernel regularly, that is a guaranteed employment type of thing."
The number of contributions from unpaid developers decreased from 14.6 percent to 13.6 percent in the latest report. Even though the source code is free, Linux is no volunteer project—it's enormously important to the businesses of Intel, IBM, Red Hat, and others, as their appearances at or near the top of every list reflect. The most plausible explanation for the decrease in volunteer developers may be that they are getting hired because of their demonstrated ability to write code, the Linux Foundation said.
67. | http://arstechnica.com/information-technology/2013/09/google-and-samsung-soar-into-list-of-top-10-linux-contributors/?comments=1&post=25299725 | CC-MAIN-2016-36 | refinedweb | 1,234 | 61.56 |
Dj.
Django.js requires Python 2.6+ and Django 1.4.2+.')), ... )
The documentation is hosted on Read the Docs)
Added namespaced URLs support
Upgraded to Jasmine 1.3.1
Added JsFileTestCase to run tests from a static html file without live server
Added JsTemplateTestCase to run tests from a rendered template file without live server
Expose PhantomJS timeout with PhantomJsRunner.timeout attribute
Upgraded to jQuery 1.8.3
Upgraded to Jasmine 1.3.0
Synchronous URLs and context fetch.
Use django.utils.termcolors
Only one Django.js test suite
Each framework is tested against its own test suite
Make jQuery support optionnal into JsTestCase
Improved JsTestCase output
Drop Python 2.6 support
Added API. | https://pypi.org/project/django.js-vinta/ | CC-MAIN-2016-44 | refinedweb | 115 | 61.33 |
This is the first part of a two-part tutorial. You can find Pt.1 here.
Introducing Kublr-Managed Kubernetes
You don’t have to install and maintain a complex cluster to benefit from all of the Kubernetes features. Instead, you can have Kublr manage installation and maintenance.
In this tutorial, we’ll demonstrate how easy it is to deploy a highly available Kubernetes cluster on top of your existing AWS infrastructure.
To begin using Kublr, login to your account and specify the AWS credentials that the Kublr platform will use to create and manage the needed resources for the Kubernetes cluster.
Select “Credentials” from the left sidebar, and click “Add credentials”.
You must create an IAM user and IAM policy, and provide the access key and secret key of that IAM user. You can do this in your AWS account by navigating to “Services > IAM > Users” and clicking “Add user”.
Verify you have “Programmatic access” selected as shown below:
Click “Next”, select “Attach existing policies directly”, then click “Create Policy”.
On the new tab, select “Create your own policy”. Complete the “Policy name” and “Description” fields. The most important field is “Policy Document,” which specifies what permissions this policy grants the user.
To copy the needed IAM Policy JSON that you should paste into this field, click “View minimal required permissions” as shown below:
Insert the JSON into “Policy Document”, and click “Create policy” to confirm creation.
Return to the previous tab, and click “Refresh” to see your newly created policy. Select the policy, and click “Next”.
Complete the user creation by clicking “Create user”. If everything was completed successfully, you should see the following:
Download your newly created Access key and Secret key, and keep these in a safe place.
Navigate back to Kublr and complete the “Credentials” creation step by filling in the keys and clicking “Save Credentials”. You will see the following message box if your IAM user is working and the policy was set correctly. (I named my credentials “aws-user-1”.)
You can now create a new cluster, navigate to “Clusters” in the left sidebar, and click “Add Cluster”. Fill in the cluster name, select the AWS region to deploy to, the credentials to use (we have only one at the moment), the SSH key pair of choice (if you want the option to login to Kubernetes Master nodes that Kublr creates for you and explore the setup), and the operating system to use (Ubuntu 16 by default, at this time).
The most important decisions during cluster creation are the number of “Masters” and their “Instance Type”. For a production cluster, it’s recommended you use at least an “m4.large” instance type. But for our tutorial we can create a highly available cluster using smaller instances; so select “3” masters and a “t2.small” instance type.
The “Advanced options” will reveal an option to select availability zones manually, but we’ll leave this decision to Kublr. Proceed with selecting how many worker nodes to start with (these will be running your “pods”). We’ll specify 3 and, for their instance type, we’ll specify “t2.small” for this tutorial. You can also turn on the “auto scaling“ option in “Advanced options”.
As you can see, we have set the maximum number of nodes to 10, so Kublr will only scale up to 10 worker nodes.
For logging and monitoring options, you have a choice of setting up Elasticsearch and Kibana automatically during cluster creation or using CloudWatch logs.
CloudWatch is recommended for production workloads. Let’s choose both “Self-hosted Elasticsearch/Kibana” and “Cloud Watch Logs”. This is possible because we can stream the logs to several targets.
Next, select the monitoring option:
We’ll select self-hosted InluxDB/Grafana out of the box. If you prefer, you can also see the metrics in your Cloud Watch service.
The final section of the configuration is “Ingress”. You can enable this option to get ingress controllers deployed and ready to use. For this tutorial, you can either skip this option or enable it so you can try out ingress rules later.
Click “Confirm and install”, to create your cluster. You should see that the cluster is now being created. Click the “Overview” tab.
This page displays all information about your cluster setup. You should wait approximately five to ten minutes for AWS to create all requested resources (EC2 instances, EBS volumes, Auto scaling groups) and for Kublr to install all components into the cluster and bootstrap the master nodes. The Kubernetes documentation describes the great amount of work to create a custom cluster from scratch. This is why Kublr exists: to abstract away the complexity and allow the user to benefit from all features that Kubernetes provides.
When the process completes, open the Kubernetes dashboard from the “Overview” tab.
You will see the list of nodes available in your cluster. Navigate to “Deployments,” and click “deploy a containerized app”.
Save the following to a file named “blue-website-deployment.yaml”:
apiVersion: apps/v1beta1 kind: Deployment metadata: name: blue namespace: default spec: replicas: 1 template: metadata: labels: name: blue spec: containers: - name: blue-website image: aquamarine/kublr-tutorial-images:blue resources: requests: cpu: 0.1 memory: 200 --- apiVersion: v1 kind: Service metadata: name: blue-website spec: ports: - port: 80 targetPort: 80 protocol: TCP name: http selector: name: blue
This contains a “Deployment” definition and a “Service”. These are Kubernetes resources. The “deployment” ensures an instance of our blue website container will run all the time and, if a worker node gets terminated and the container shuts down, Kubernetes will reschedule it on another node. The “Service” resource creates a new load balancer in your AWS account, and configures it to point to the blue website container.
Select “Upload a YAML or JSON file”, chose blue-website-deployment.yaml, and click “Upload”.
Both “deployment” and “service” will be created, and you can see this in the dashboard.
To see the “Service,” which is both an internal and external load balancer in our case (because the service manifest from our uploaded file contained the “type: LoadBalancer” parameter), click “Services” in the left sidebar.
You can see the “blue-website” service on the screenshot below:
Click the external endpoint to navigate to our website, and you will see the blue demo page we just deployed.
After deploying the application, it’s time to check the logs to see if your application has any issues. As you’ll remember, we have installed our cluster with both logging options, Elasticsearch with Kibana and Cloud Watch logging target enabled. To access Kibana, download the configuration file from the Kublr “Overview” tab.
Copy the “Basic Auth” username and password.
Open Kibana and log in.
When first logging in, you’ll see the creating index pattern page. Click “Create” and navigate to the “Discover” tab.
Here you can find all “stdout” and “stderr” outputs of all your containers and all Kubernetes components. Based on that, you can set alerts with ElastAlert (a popular tool for querying Elasticsearch periodically to detect patterns of error spikes, traffic anomalies, etc), and do any kind of analytics that this great EFK combo allows. (“E” for Elasticsearch, the search engine for all your logs; “F” for fluentd, the log collector; and “K” for Kibana, the querying and dashboarding solution.)
You can also see the logs in CloudWatch. If your current infrastructure is tightly integrated with this service already and you rely on CloudWatch Alarms and use the Events and Metrics, this logging option might be a better choice for you. To see the logs written by Kubernetes, navigate to CloudWatch, Logs, and click the log group.
You will see the list of nodes from which those logs were collected. Select any of them, and try to search for a “tutorial” keyword. The node which hosts our demo container will have a few messages about pulling our docker image.
Let’s have a quick look at Grafana, the metrics dashboard. All container metrics are collected and stored in InfluxDB, which is a time series database. Grafana allows us to visualize those metrics. Here is an advanced tutorial on Grafana.
Navigate to Grafana from the “Overview” tab in Kublr, and use the same “Basic Auth” credentials we used for Elasticsearch. Then click the top menu, and select the “Cluster” dashboard.
You will see both the overall cluster utilization and per node breakdown. You can select nodes to inspect from the top “nodename” combo box.
To see per container drilldown, select the “Pods” dashboard from the top menu. You can inspect every container utilization and setup alerts. Read more about Grafana alerting and notification features.
The possibilities are endless when using Kubernetes. But with ECS you cannot easily integrate Grafana metric collection or log collection to Elasticsearch. Getting logs from ECS services and instances is complicated. Kubernetes, however, allows us to use the latest features of all the open source technologies in this short tutorial, right out of the box.
Our previous tutorial covers almost every aspect of a recent Kubernetes release and describes possible use cases. Review the tutorial to learn about ingress rules and ingress controllers (advanced load balancing and routing), Stateful Sets and Replica Sets (stateful containers, with persistent storage, and fast scaling or failure recovery of replica sets), Config Maps and Secrets (helpful tools to pass sensitive information or dynamic templates to a container), and much more. To learn more about containerized stateful databases and services, check out our tutorials about MySql replication cluster on Kubernetes and How to run a MongoDB replica set on Kubernetes.
If you find the ECS service limited and not easy to use (when navigating through all resources and trying to understand the current state of 20–30 microservices when your infrastructure and application starts to grow) and your team does not have the capacity to learn the inner workings of Kubernetes master nodes and maintain the cluster manually, consider trying Kublr-in-a-Box. | https://kublr.com/blog/comparing-aws-ecs-and-self-managed-kubernetes-kublr-managed-tutorial/ | CC-MAIN-2019-18 | refinedweb | 1,663 | 63.29 |
On Mon, Dec 13, 2010 at 5:25 PM, Daniel Koch <daniel@transgaming.com> wrote: > > On 2010-12-13, at 4:10 PM, Kenneth Russell wrote: > > On Mon, Dec 13, 2010 at 11:15 AM, Chris Marrin <cmarrin@apple.com> wrote: > > On Dec 13, 2010, at 9:28 AM, Kenneth Russell wrote: > > ... > > Having this extension improves the testability of the browser's lost > > context implementation. It would even add value to your wrapper by > > exercising the real code paths the WebGL implementation would use. > > I would like to add this extension to the registry and support it in > > Chromium. Vlad, Benoit, is there any interest from Mozilla in > > supporting this extension to exercise your implementation? Tim, how > > about Opera? If there isn't interest from any other vendor then we can > > just use the CHROMIUM_ prefix, but I think it would be useful more > > generally. > > >. > > There was discussion about having another prefix for "experimental" > but cross-vendor extensions, but I think that's overkill. > > -Ken > > On Desktop GL (and GL ES) multi-vendor extensions use the EXT_ prefix... I > don't see any reason that couldn't be used in WebGL as well. It is very likely that we will wrap existing EXT_ extensions for WebGL. If WebGL-specific extensions used this prefix, there would be a namespace conflict. -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl@khronos.org. To unsubscribe, send an email to majordomo@khronos.org with the following command in the body of your email: | https://www.khronos.org/webgl/public-mailing-list/archives/1012/msg00164.html | CC-MAIN-2015-11 | refinedweb | 247 | 66.54 |
Informational observations
There are some messages that the C# compiler could give that really wouldn't fall into the warning or error camp. These would fall into the category of more informational, like "hey did you know that..." and would refer to issues that you could choose to address or not with no ill affects. A warning would be something like:
public class Class {
private int property;
public int Property {
get {
return Property;
}
}
}
"Hey! Returning the containing property could potentially lead to an infinite recursion at runtime".
or:
public class Class {
int value;
public Class(int value) {
value = value;
}
}
"Hey! Assigning a value into itself has no affect."
These are warnings on perfectly legal code but in most cases it's probably not what you wanted to do and usually indicates an error. So while the message might be spam, it seems a reasonable tradeoff to give it to you.
An informational message is more like:
using System.Collections;
public class Class {}
"The 'using System.Collections;' state was unncessary and can be removed".
I see this sort of message as different than before because it's not really telling you of a potential problem in your code. (Unless you expected that System.Collections was going to be used and now you're shocked that it wasn't). It's also not really a risky thing to tell the user. If you remove the "using" and then attempt to use a type from that namespace (like IList) we'll then give you the smart tag immediately to add it in, so it's not going to be really annoying to be adding/removing namespaces all the time.
Would these kind of messages be appreciated? If so, would there be others that you'd find useful? Tiny little notes that aren't necessary but would help you out with commong coding scenarios, etc. | https://docs.microsoft.com/en-us/archive/blogs/cyrusn/informational-observations | CC-MAIN-2020-16 | refinedweb | 311 | 64.91 |
> When I do the transform I have a namespace > "xmlns: getting added to all the tags I generate. I found that this > somehow has to > do with the transform not happening. > > First I would like to suppress these namespaces from > appearing all over the document In order to get rid of them, we need to know where they are coming from. To know that, we need to see your stylesheet. > > second, Iam transforming from xsl to xsl. So I had an alias > setup in my > original stylesheet so all my xsl tags are in the > namespace-alias "x:". > I was then doing a find/replace for all x to xsl and performing the > transform. Now Iam not in a position to do a find/replace. > Can someone tell > me why the namespace-alias didn't rename all the tags to "xsl:". I suspect you have completely misunderstood what namespace-alias does, but I can't be sure without seeing your code. > > Thirdly, > Why doesn't the processor process the stylesheet in the "x:" > namespace. I > have it as follows > > <x:stylesheet xmlns: xmlns: > There's nothing wrong with that bit of code; your error must be elsewhere in the part you haven't shown us. Michael Kay | https://www.oxygenxml.com/archives/xsl-list/200501/msg00945.html | CC-MAIN-2020-40 | refinedweb | 207 | 80.31 |
This is the second of a two-part series on using social media to locate eyewitnesses to important events. In part one, I showed you how to use the Instagram API to find eyewitnesses to a live video shoot of Macklemore's in Seattle. In this part, we'll use the Twitter API to find attendees of President Obama's speech in Selma at the Edmund Pettus Bridge.
You can download code for both episodes by using the GitHub repository link in the sidebar. You may also be interested in my Tuts+ series, Building With the Twitter API.
Twitter's geosearch capabilities are more limited and therefore require a bit more detailed code to use. Geotagged posts on Twitter can only be found from the last seven days. And they are only searchable by date (not time), so you have to filter the API results for precision.
I do participate in the discussions below. If you have a question or topic suggestion, please post a comment below. You can also reach me on Twitter @reifman or email me directly.
What We Covered in Part One
The phones we carry in our pockets record our every move, sharing it with cell providers and often third-party software companies whose motivations generally focus on profit..
So, geotagging can be used for good. In this series, we're exploring how journalists or law enforcement might locate potential eyewitnesses to important events such as a crime or accident scene using social media.
However, geotagging can also be used abusively. Berkeley computer scientists and educators built the Ready or Not? app to showcase how geotagging in Twitter and Instagram record our every move.
Here's Apple co-founder Steve Wozniak's Twitter account in the app:
The geotagging on Instagram and Twitter is accurate enough to allow someone to easily determine your residence, place of work and travel routine.
In this episode, I'll guide you through using the Twitter.
If you're a law enforcement agency or media entity that would like more information, please feel free to contact me directly. I would also be interested in any successful uses of this code (for good)—they'd make an interesting follow-up story.
What We Did With Instagram
Last episode, we used the Instagram API to find eyewitnesses to Mackelmore's live 2013 video shoot for White Cadillac. Quite easily, we managed to find Instagram member Joshua Lewis's photo of Macklemore stepping out of his vehicle (cool, huh?):
Now, let's get started using the Twitter API.
Using the Twitter API
As with Instagram, you need to sign in to your Twitter account and register a developer application. You should register an app like this:
Twitter will show you your application details:
Here's the settings page:
Here are the keys and access tokens for the application. Make note of these.
Then, scroll down and create access tokens for your account. Make note of these too.
Add all four of these configuration keys and secrets to your
/var/secure/eyew.ini file:
mysql_host="localhost" mysql_db="eyew" mysql_un="xxxxxxxxx" mysql_pwd="xxxxxxxxxxxx" instagram_client_id = "4xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx7" instagram_client_secret = "1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx4" twitter_key = "zxxxxxxxxxxxxxxxxxxxx2" twitter_secret ="4xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxp" twitter_oauth_token="1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxs" twitter_oauth_secret="exxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxV"
Then, we'll create an Active Record migration to create our Twitter model. This will store the tweets we receive from the API calls.
<?php use yii\db\Schema; use yii\db\Migration; class m150309_174014_create_twitter_table extends Migration { public function up() { $tableOptions = null; if ($this->db->driverName === 'mysql') { $tableOptions = 'CHARACTER SET utf8 COLLATE utf8_unicode_ci ENGINE=InnoDB'; } $this->createTable('{{%twitter}}', [ 'id' => Schema::TYPE_PK, 'moment_id' => Schema::TYPE_INTEGER . ' NOT NULL', 'tweet_id' => Schema::TYPE_BIGINT . ' NOT NULL', 'twitter_id' => Schema::TYPE_BIGINT . ' NOT NULL', 'screen_name' => Schema::TYPE_STRING . ' NOT NULL DEFAULT 0', 'text' => Schema::TYPE_TEXT . ' NOT NULL ', 'tweeted_at' => Schema::TYPE_INTEGER . ' NOT NULL', 'created_at' => Schema::TYPE_INTEGER . ' NOT NULL', 'updated_at' => Schema::TYPE_INTEGER . ' NOT NULL', ], $tableOptions); $this->addForeignKey('fk_twitter_moment', '{{%twitter}}', 'moment_id', '{{%moment}}', 'id', 'CASCADE', 'CASCADE'); } public function down() { $this->dropForeignKey('fk_twitter_moment','{{%twitter}}'); $this->dropTable('{{%twitter}}'); } }
Just as we did in part one, you need to run the migration:
./yii migrate/up Yii Migration Tool (based on Yii v2.0.3) Total 1 new migration to be applied: m150309_174014_create_twitter_table Apply the above migration? (yes|no) [no]:yes *** applying m150309_174014_create_twitter_table > create table {{%twitter}} ... done (time: 0.008s) > add foreign key fk_twitter_moment: {{%twitter}} (moment_id) references {{%moment}} (id) ... done (time: 0.007s) *** applied m150309_174014_create_twitter_table (time: 0.019s) Migrated up successfully.
Then, I used Yii2's code generator, Gii, to create the model and CRUD controllers for the Twitter model. If you get the latest GitHub repository code using the sidebar link on this tutorial, you'll have the code as well.
Create a New Moment
Because Twitter limits geolocation searches to the past week, I eventually chose President Obama's Selma 50th Anniversary speech at the Edmund Pettus Bridge.
I used Google Maps again to get the GPS coordinates for the bridge:
Then, I created a Moment for the speech to search. I updated it a few times to tweak the geographic radius of the search (it's a bridge) and the time range:
Search Using the Twitter API
The limitations of the Twitter API are that it only allows you to search by date, e.g. 2015-03-07, whereas Instagram is indexed by precise Unix timestamps. Therefore, we have to begin our Twitter search a full day ahead and search backwards.
Since we're likely to obtain a lot of tweets outside our desired time range, we have to make repeated calls to the Twitter API. Twitter returns up to 100 tweets per API request, and allows 180 requests per 15-minute window.
I'm using James Mallison's Twitter API Library for PHP. Here's how we set up the library to make calls:
<?php namespace app\models; use Yii; use yii\db\ActiveRecord; use app\models\Gram; use Instagram; use TwitterAPIExchange; ...);
Initially, we request 100 results from Twitter at our GPS coordinates up to a specific date.); // Query settings for search $url = ''; $requestMethod = 'GET'; // rate limit of 180 queries $limit = 180; $query_count=1; $count = 100; $result_type = 'recent'; // calculate valid timestamp range $valid_start = $this->start_at; // $until_date and $valid_end = // start time + duration $valid_end = $this->start_at + ($this->duration*60); Yii::trace( 'Valid Range: '.$valid_start.' -> '.$valid_end); $until_date = date('Y-m-d',$valid_end+(24*3600)); // add one day $distance_km = $this->distance/1000; // distance in km // Unused: &since=$since_date // $since_date = '2015-03-05'; // Perform first query with until_date $getfield ="?result_type=$result_type&geocode=".$this->latitude.",".$this->longitude.",".$distance_km."mi&include_entities=false&until=$until_date&count=$count";
We only record tweets within our precise time range, ignoring the other results. As we process these, we make note of the lowest tweet ID received.
$tweets = json_decode($twitter->setGetfield($getfield) ->buildOauth($url, $requestMethod) ->performRequest()); if (isset($tweets->errors)) { Yii::$app->session->setFlash('error', 'Twitter Rate Limit Reached.'); Yii::error($tweets->errors[0]->message); return; } $max_id = 0; Yii::trace( 'Count Statuses: '.count($tweets->statuses)); Yii::trace( 'Max Tweet Id: '.$max_id); foreach ($tweets->statuses as $t) { // check if tweet in valid time range $unix_created_at = strtotime($t->created_at); Yii::trace('Tweet @ '.$t->created_at.' '.$unix_created_at.':'.$t->user->screen_name.' '.(isset($t->text)?$t->text:'')); if ($unix_created_at >= $valid_start && $unix_created_at <= $valid_end) { // print_r($t); $i = new Twitter(); $i->add($this->id,$t->id_str,$t->user->id_str,$t->user->screen_name,$unix_created_at,(isset($t->text)?$t->text:'')); } if ($max_id ==0) { $max_id = intval($t->id_str); } else { $max_id = min($max_id, intval($t->id_str)); } }
Then we loop, making repeated requests to Twitter (up to 179 more times), requesting additional records that are earlier than the previous batch's lowest tweet ID. In other words, on subsequent requests, instead of querying up to a specific date, we query to the max_id of the lowest tweet ID that we've received.
We stop when less than 100 records are returned or when returned tweets are earlier than our actual range.
If you need access to more than 18,000 tweets, you'll need to implement a background task to call the Twitter API, as we've done in our other Twitter API series.
As we process API results, we need to filter tweets, only recording those that fall within our actual start time and end time.
Note: The Twitter API has a lot of frustrating quirks which make paging more difficult than it should be. Quite frequently Twitter returns no results without an error code. Other times, I found it returning a small number of results, but that didn't mean that another request would not return more. There are no very clear ways to know when Twitter is done returning results to you. It's inconsistent. Thus, you may notice my code has a few interesting workarounds in it, e.g. examine $count_max_repeats.
$count_repeat_max =0; // Perform all subsequent queries with addition of updated maximum_tweet_id while ($query_count<=$limit) { $prior_max_id = $max_id; $query_count+=1; Yii::trace( 'Request #: '.$query_count); // Perform subsequent query with max_id $getfield ="?result_type=$result_type&geocode=".$this->latitude.",".$this->longitude.",".$distance_km."mi&include_entities=false&max_id=$max_id&count=$count"; $tweets = json_decode($twitter->setGetfield($getfield) ->buildOauth($url, $requestMethod) ->performRequest()); if (isset($tweets->errors)) { Yii::$app->session->setFlash('error', 'Twitter Rate Limit Reached.'); Yii::error($tweets->errors[0]->message); return; } // sometimes twitter api fails if (!isset($tweets->statuses)) continue; Yii::trace( 'Count Statuses: '.count($tweets->statuses)); Yii::trace( 'Max Tweet Id: '.$max_id); foreach ($tweets->statuses as $t) { // check if tweet in valid time range $unix_created_at = strtotime($t->created_at); if ($unix_created_at >= $valid_start && $unix_created_at <= $valid_end) { $i = new Twitter(); $i->add($this->id,$t->id_str,$t->user->id_str,$t->user->screen_name,$unix_created_at,(isset($t->text)?$t->text:'')); } else if ($unix_created_at < $valid_start) { // stop querying when earlier than valid_start return; } $max_id = min($max_id,intval($t->id_str))-1; } if ($prior_max_id - $max_id <=1 OR count($tweets->statuses)<1) { $count_repeat_max+=1; } if ($count_repeat_max>5) { // when the api isn't returning more results break; } } // end while
One of the first results returned included the tweet below by Fred Davenport showing President Obama on stage:
Here's it is on Twitter:
Then, as you browse the results further, you can find many more people present tweeting about Obama—including the media:
Now, let's do a more local search.
A Second, More Local Search
Key Arena is Seattle's large concert and sports arena. This past weekend they held the Pac-12 Women's Basketball Tournament:
Let's get our GPS coordinates for Key Arena from Google Maps:
Then, I created and tweaked a moment to find a longer time range for the weekend of tweets:
And, here are some of the results. My favorite is:
"I wanna leave this basketball game. I hate basketball."
For the most part, it seems to me that Instagram's API is far more powerful than Twitter's and yields generally more intriguing results. However, it depends on the kind of person that you're looking for. If you just want to identify people who were there, either API works well.
What We've Learned
I hope you've enjoyed this series. I found it fascinating and was impressed by the results. And it highlights the concerns we should all have about our level of privacy in this interconnected digital age.
The APIs for Instagram and Twitter are both incredibly powerful services for finding social media users who were nearby certain places at certain times. This information can be used for good and it can be abused. You should probably consider turning off your geolocation posting—follow the links at the Ready or Not? app.
You may also want to check out my Building With the Twitter API series, also on Tuts+.
| https://code.tutsplus.com/tutorials/using-social-media-to-locate-eyewitnesses-the-twitter-api--cms-23580 | CC-MAIN-2020-45 | refinedweb | 1,905 | 55.24 |
XML Key Management Services WG
DRAFT 27th August 2003 XKMS Teleconference Minutes
Chairs: Stephen Farrell, Shivaram Mysore
Note Takers: Shivaram Mysore
Last revised by $Author: smysore $ $Date: 2003/09/03 00:25:52 $
Participants
Shivaram Mysore, Sun Microsystems
Stepehen Farrell, Trinity College, Dublin
Frederick Hirsch, Nokia Mobile Phones
Phill Hallam-Baker, Verisign
Yassir Elley, Sun Microsystems
Rich Salz, Datapower
Regrets
Jose Kahan, W3C
Status Update
XKMS specification in close to CR form has been posted by Phill - see [
]. Apart from boiler plate fixes, all other outstanding issues including examples have been addressed. Also updated namespace and schema to support exclusive C14N in the transform section.
Shivaram is working on a preliminary version of the interop matrix with Phill, Blair, and Stephen. Will post a version soon to the list.
Minutes
AI:
Stephen and Shivaram to work with Jose and go thro' the pubrules checker and scrub the latest version of the spec to prepare for posting it as a CR.
AI:
Draft CR->PR implementation requirements - Shivaram and Phill - Check compilance on spec.
Minimal vs complete - ex. XKISS only implementation is it compliant?
Minimum 2 implemenations for any feature to be in the interop document. If 2 implementations are not available for a feature, then that feature will be dropped.
AI:
Shivaram to ping Tony Nadlin, IBM for Interop
AI:
Rich Salz, Datapower, to send to the list about implememting XKMS - Datapower has indicated that they will be participating in the interop
Done
AI:
All - Any nits to be sent to the list ASAP regarding the spec.
Shooting for an Interop matrix alpha candidate during November time
Next Telecon
Next Telecon
Date: September 24, 2003; Time - 8:30AM - 9:30AM | http://www.w3.org/2001/XKMS/Minutes/030827-tele.html | crawl-001 | refinedweb | 280 | 50.26 |
I’ve been messing around around with the ms translator api () and thought I’d post my sample code.
A quick app that speaks with a localized French accent.
8 thoughts on “MS translator api text to speech WP7”
Hello,
I’ve been trying to run your code, with not a lot of luck…
It appears to have a migration problem, I tried following this guide , but still, I can’t run the code.
I’ve changed the namespaces but I still get a type ‘phoneNavigation:PhoneApplicationPage’ was not found.
could you please give me a hand?
thanks
Eyal
I have just tried this project with the latest tools and it works ok for me. Please let me know more about the issue/error you are seeing…
Does this need a working internet connection?
Yes, it uses the Bing APIs for which you need an internet connection.
Managed to compile but got the message upon click the Speak button:
Custom tool warning: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information
File : Regerence.svcmap
Please help. Many thanks.
hi shirley,
Please read this
There are also some workarounds here…. | https://peted.azurewebsites.net/ms-translator-api-text-to-speech-wp7/?replytocom=691 | CC-MAIN-2020-10 | refinedweb | 197 | 80.82 |
Doing Your Duty (Cycle)
A certain part's end-of-life announcement has me rewriting some old pulse width modulation (PWM) code that I haven't touched in quite a few years. PWM is a fundamental technique used to control things like lights and motors and even generate analog voltages (with a little external help).
Imagine you are in a room with a single light and a switch. Turn the switch on and you can see because the light is on. Turn the light off and the room goes dark. What if you could switch the light on and off very fast? Suppose you turned it on for a hundredth of a second and then turned it off for another hundredth and just kept going like that. You'd see the light on, but it would appear to be only half as bright as before because your eye averages the light that hits it over a longer time than the switching rate. If you turned the light on 90% of the time, you'd see nearly the full brightness; and if you kept the light off 90% of the time, you'd see very dim light. That percentage, by the way, is called the "duty cycle."
This is a powerful concept because it allows apparent analog control using nothing but a digital signal. The analog part occurs in the physical actuator (in this case, your eye). A motor will work the same way, and it is actually better to control a motor with a pulse than with an analog voltage. Motors often react differently to different voltages, and may produce lower torque at a reduced voltage. But if the motor is turned on full and then turned off, it can operate normally — but the mechanical averaging will produce the desired speed.
There are two common ways to generate PWM. The first thing you'd probably think of is just to pick a maximum cycle time and then switch things on and then off over that cycle. To make things simple, suppose you had a total cycle time of 100 milliseconds. Then a 40% duty cycle would be a true output for 40 milliseconds followed by 60 milliseconds of a zero output.
Here's a simple C language simulation of this algorithm:
#include <stdio.h> int pwm(unsigned t, unsigned dc) { unsigned cycle=t%256; return cycle<dc; } int main(int argc, char *argv[]) { unsigned t; unsigned dc=128; if (argc==1) { fprintf(stderr,"equal; }
In the code, the cycle time is 256 "ticks" (which can be anything you want them to be since it is only simulated). The duty cycle ranges from 0 to 255, so 50% is 128, for example. You can't quite get to 100%, although you can go to zero percent (try it — just run the program with a duty cycle from 0 to 255 on the command line). The output is sent to
stdout (catch it in a file and open in an editor that won't wrap long lines).
This is simple, which is good since you want to go fast. However, it leads to a fixed frequency signals. If the tick time in the example code is 1 millisecond, the frequency output will always be just under 4Hz. The only question is how long is it off/on, but the total time is always 1/256 of a second.
Obviously, the frequency plays an important part in PWM designs. If you turned the light on and off only once per minute, your eye is fast enough to not be fooled. Other physical systems will also have some maximum time you can use as a cycle. In some cases, a motor may have some resonance with a certain frequency and that can lead to unwanted noise or other effects. So in many cases, it is actually good to have a fixed frequency.
On the other hand, what happens if you change the PWM value? Unless you don't mind transients that look like a higher or lower duty cycle, you only want to change the duty cycle at the start of a new cycle. Using the aforementioned scheme, that means you can't change your output any faster than every 256 ticks.
However, that time cycle is the worst case. Consider a 50% duty cycle (128 as the program input). The fastest way to generate that would be to simply turn the output on, then off, and so on for every cycle. If each tick in the previous example was 1 millisecond, a 50% duty cycle signal could have a period of 2 milliseconds or 500Hz. Much faster than 4Hz.
But hat is the best case. If you want to produce a 1/255, you still need 256 cycles. An easy way to calculate this is to simply add the duty cycle to an accumulator and output the overflow bit. Assuming byte-wide quantities, you wind up with the minimum period (maximum frequency) possible for that duty cycle.
Consider 128 (50%) again. You'd see something like this:
For 64 (25%) it would look like:
I picked numbers that evenly divide 256 so the sequence always repeats, but it works even if you don't. Here's the C code:
#include <stdio.h> int pwm(unsigned t, unsigned dc) { static unsigned acc=0; int retval=0; acc+=dc; // set output to carry if (acc>=256) retval=1; // simulate byte operation here acc=acc&0xFF; return retval; } int main(int argc, char *argv[]) { unsigned t; unsigned dc=128; if (argc==1) { fprintf(stderr,"prop; }
Try a few examples and you'll see that it works. Of course, in real life, you'd want to use assembly language interrupt service routines driven by timer interrupts. In some ways, that's actually easier because the timing is done for you and byte-wide arithmetic is natural (or whatever word size you choose).
In addition to motors and lights, you can use a simple RC (resistor/capacitor) integrator to generate an analog voltage. Frequency counts there, too. A faster signal will change the output voltage faster, but will be more sensitive to loads on the integrator output. Of course, a buffer amplifier on the integrator can easily take care of that.
PWM is a handy trick to have around. Many CPUs today have specialized PWM hardware that do all the work for you, but the effect is the same. You might wonder why I'm not using one of those. Well, I do have my reasons, but that's an issue for another day... | http://www.drdobbs.com/cpp/doing-your-duty-cycle/232301204 | CC-MAIN-2016-50 | refinedweb | 1,096 | 70.43 |
-
parallelizing an optimization phase
After some performance improvements to the optimizer [1] [2], I wanted to try for real parallelizing one of the optimization phases (say, one of the "easier" ones). The hypothesis was that typer was going to be the single source of races. Thus I picked an optimization phase (dead code elimination) that does most of its work without constantly invoking Tree.tpe or Symbol.info (relatively speaking). After synchronizing on global, the concurrent version takes on average 1/3 of the sequential one (no matter how many more threads are used, due to contention on global).
In principle, that approach could also do the trick for Inliner (to recap, Inliner is the largest contributor to compilation time under -optimize). With the caveats that Inliner is definitely more typer-reliant, and perhaps the best speedup achievable would be 1/2. All that seems worthy exploring in more depth. What you can do is:
(a) background material at The Scala Compiler Corner [3]
(b) sources for parallel DeadCodeElimination at
The idioms used in the parallel version really boil down to two:
(c) making mutable-shared-state not shared across threads (e.g., Linearizer and Peephole are now instance-level and thus not shared across tasks submitted to Executor)
(d) now comes the real synchronization. Say, Instruction.produced invokes typer (at least for some overrides). A working solution is:
private def getProduced(i: Instruction): Int = {
if(i.isInstanceOf[opcodes.CALL_METHOD]) {
global synchronized i.produced // locked because CALL_METHOD.produced() invokes producedType
} else i.produced
}
That's all there's to it. It can be tricky because (thinking aloud) -Ydebug or -Ylog might result in typer being invoked (I haven't tried this). However, the same principles apply.
Perhaps this prototype can evolve into a parallel version of dce in trunk (contributions are welcome!).
Miguel
[1]
[2]
[3]
Re: parallelizing an optimization phase
On Sun, Jan 22, 2012 at 3:27 AM, Miguel Garcia wrote:
> That's all there's to it. It can be tricky because (thinking aloud) -Ydebug
> or -Ylog might result in typer being invoked (I haven't tried this).
> However, the same principles apply.
Well, it's not supposed to. There was a time when you couldn't print
much of anything without hosing yourself. Now, AFAIK, you can quite
safely print symbols; printing trees may or may not be safe, not sure;
printing types is not safe, haven't looked into why.
Why can you print symbols?
scala> intp("scala.collection.parallel.mutable.ParMap").info.members
map (_.defString) >
def withDefaultValue: <?>
def withDefault: <?>
...
Because it is discriminating about it. Give the infos a little nudge,
see some more.
scala> intp("scala.collection.parallel.mutable.ParMap").info.members
foreach (_.info)
scala> intp("scala.collection.parallel.mutable.ParMap").info.members
map (_.defString) >
def withDefaultValue(d: <?>): scala.collection.parallel.mutable.ParMap[K,V]
def withDefault(d: <?>): scala.collection.parallel.mutable.ParMap[K,V]
All manifestations of the typer being invoked (or really, much of
anything being invoked) when calling toString should in my view be
considered bugs. If necessary there could be a flag somewhere which
is flipped into forcing mode when printing errors or wherever else it
is that one actually wants to influence the outcome by printing
something. But the default should be to affect nothing.
Re: parallelizing an optimization phase
How can I benchmark this particular method? There are some changes I'd
like to try out.
Oops, I gotta go. Let me quote SI-5045:
"When using a regular expression a user can state explicitly that they
want anchoring and the Regex library should not mandate that decision.
As an aside this often leads to people surrounding their patterns with
".*PTRN.*" which can turn a nice fast pattern into one with heavy
backtracking. See Mastering Regular Expressions by Friedl for
details."
And now let me quote the code:
/** The start of a scaladoc code block */
protected val CodeBlockStart =
new Regex("""(.*)((?:\{\{\{)|(?:\u000E
\u000E))(.*)""")
Short of adopting the suggestion made in the ticket, replacing .* with
.*? at the start might result in big gains.
On Thu, Jan 26, 2012 at 07:39, Miguel Garcia wrote:
>
>
I guess there's an interesting performance story to be told for `CommentFactory$class.parse0$1()` (now I got its name right). However, given that I'm not familiar with that code, all I've measured is the time it takes for DeadCodeElimination (rather, my parallel version of it) to analyze its ICode at compile time. I haven't look into what that ICode can be, nor its runtime impact. As far as parallel-dce is concerned, it's already doing its job: speeding up the thing as much as possible.
Miguel | http://www.scala-lang.org/node/12271 | CC-MAIN-2013-20 | refinedweb | 780 | 58.28 |
Guice 2.0 is almost there, but not quite
By sandoz on Aug 28, 2009
Guice 2.0 is a clean and well-thought-out framework for modularization and dependency injection. It is really good, but it could be really really good with a couple of tweaks.
James Strachan created GuiceyFruit in an effort to move Guice from "really good" to "really really good", or say from 95% there to 99% there. Some patches from GuiceFruit got accepted in Guice 2.0 but some are still pending so GuiceyFruit maintains a patched version of Guice 2.0 for it's own needs.
Jersey depends on GuiceyFruit and the patched Guice 2.0 for it's Guice support (although it is possible to use "native" Guice if you are not using any GuiceyFruit features).
Currently with Guice 2.0 i cannot integrate support for the binding of JAX-RS types and annotations. I would like to be able to do the the following with Guice/Jersey/JAX-RS:
@RequestScoped @Encoded public class MyResource { private final String q; @Inject public class MyResource(@DefaultValue("foo") @QueryParam("q") String q) { this.q = q; } }
But, currently as far as i am aware, it is not possible for two reasons:
- a provider binding does not have any context to what it is providing to; and
- an annotation that is not meta-annotated with @BindingAnnotation cannot be utilized as a binding annotation.
A good example of the complexity that results due to lack of support for 1) is presented in the section on Custom Injections of the Guice User's Guide. This section presents an example of how to support injection of a org.apache,log4j.Logger instance with fields annotated with the custom annotation @InjectLogger. The logger instance is initiated with the declaring class of the field.
The developer is required to implement a TypeListener that iterates through the fields of the class to find an appropriate field to inject a Logger, and a MembersInjector that injects the instance of the Logger onto the appropriate field.
While these interfaces are useful they are over-complex for such a use-case and the developer cannot use @Inject with Logger for constructor, method and field injection.
The injection of Logger can be reduced to a provider method of a module if Guice could support the injection of InjectionPoint:
@Provides public Logger providerLogger(InjectionPoint ip) { return Logger.getLogger(ip.getMember().getDeclaringClass()); }
With respect to the JAX-RS example a provider of the String type annotated with @QueryParam requires access to the annotations declared on the constructor parameter and the annotations declared on the defining class (it does not appear possible to get access to the former with InjectionPoint so that would require some tweaks as well).
Moving on to reason 2). If Guice supported the following binding in a module:
declareAnnotationAsBindingAnnotation(QueryParam.class); bind(String.class) .annotatedWith(QueryParam.class) .toProvider(QueryParamProvider.class);
then it would be possible for Jersey to supply a Jersey-specific Guice module that supported all the Jersey and JAX-RS annotations in addition to enabling developers to easily add their own providers for such annotations.
If Guice could do that it would be 99% there IMHO and would be a really really good dependency injection framework. | https://blogs.oracle.com/sandoz/tags/injection | CC-MAIN-2016-44 | refinedweb | 541 | 51.18 |
1. Introduction
This document will cover practical approach to White box Testing Techniques using Microsoft Visual Studio 2005 Test Team Suite. It covers concepts with a simple, easy to follow example.
As Microsoft Visual Studio 2005 Test Team Suite is new to the market, there is not enough and useful documentation for testing. So we are sharing our experience in this paper.
2. Topics Covered
3. What is White box testing?
White box testing is a technique for Unit Testing used to validate that a particular module of source code is working properly. The idea is to write test cases for all functions and methods so that whenever a change causes a regression, it can be quickly identified and fixed. Ideally, each test case is separate from the others. This type of testing is mostly done by the developers and not by end-users.
4. Benefits
The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. Unit testing provides a strict, written contract that the piece of code must satisfy. As a result, it affords several benefits.
4.1. Cheaper costThere is not additional cost for Team Test Suite. It is a component that comes along with the MS Visual Studio IDE.
4.2. Disciplined DevelopmentWell defined deliverable for the developer and more quantifiable progress. Often, just considering a test case will identify issues with an approach/design.4.3. Facilitates change and Reduces code fragilityUnit testing allows the programmer to refractor code at a later date, and make sure the module still works correctly (i.e. regression testing). 4.4. Simplifies integrationUnit testing helps eliminate uncertainty in the pieces themselves and can be used in a bottom-up testing style approach. By testing the parts of a program first and then testing the sum of its parts, integration testing becomes much easier.4.5. DocumentationUnit testing provides a sort of "living document". Clients and other developers looking to learn how to use the class can look at the unit tests to determine how to use the class to fit their needs and gain a basic understanding of the API.
4.6. Relative cost to fix defects graph
A high percentage of errors are caught in the review process:According to Fagan (1976) 67%,Jones (1977) 85%The earlier an error is caught the cheaper it is to fix.
5. White Box Testing Techniques Types
BS7925-2 lists all the white box test techniques
5.1. Why do we need White Box techniques?
5.2. Systematic techniques exist white box testing:
5.3. Designing test cases:
To plan and design effective cases requires a knowledge of the
6. Types
6.1. Statement Testing
Any test case with b TRUE will achieve full statement coverageNOTE: Full statement coverage can be achieved without exercising with b FALSE
6.2. Branch / Decision testing
Branch / Decision testing example
a;if (b) { c; }d;
Note: 100% statement coverage requires 1 test case (b = True) 100% branch / decision coverage requires 2 test cases: (b = True & b = False)
6.3. Cyclometric Complexity Technique:
A quantitative measure of the logical complexity of a program.
There are three algorithms for its calculation.V(G) = E - N + 2V(G) = Np + 1V(G) = RV(G) = the cyclometric complexityN = Number of flow graph nodesNp = Number of flow graph predicate nodesE = Number of flow graph edges R = Number of flow graph regions
Example:void Sort(int items[],int toPos) { int pos = 0, minPos = 1;
/* iterate over the vector */1 while( pos < toPos ) {2 minPos = pos + 1;
/* find the smallest item */3 while( minPos <= toPos ) {4 if(items[pos] < items[minPos]){ /* swap items */5 int temp = items[pos];6 items[pos] = items[minPos];7 items[minPos] = temp;8 } /*end if*/9 minPos++;10 } /*end while inner loop */11 pos++;12 } /*end while outer loop */ }V(G)= Cyclometric complexityN = Number of flow graph nodes =7Np = Number of flow graph predicate nodes =3A predicate node represents a point where flow may continue on two or more paths. E = Number of flow graph edges =9R = Number of flow graph regions =4
V(G) = E - N + 2 = 9 - 7 + 2 = 4V(G) = Np + 1 = 3 + 1 = 4V(G) = R = 4I.E. V(G) = 4
Flow Graph
7. Microsoft Visual Studio 2005 Overview
MSVS 2005 is an expansion of Visual Studio Professional including support for various roles in the development process.
Product goals:
7.1. Architecture
7.2. What is Team Test Suite?
With the latest release of Visual Studio Test System (VSTS) comes a full suite of functionality for Visual Studio Team Test (TT).
Team Test is a Visual Studio integrated unit-testing framework that enables:
In addition, Team Test includes a suite of testing capabilities not only for the developer, but the test engineer as well. E.g.: Manual testing
8. Hands-on ApproachIn this section, we are going to walk through how to create Team Test unit tests. We begin by creating a sample VB project, and then generating the unit tests for methods within that project. This will provide readers new to Team Test and unit testing with the basic syntax and code. It also provides a good introduction on how to quickly set up the test project structure. One of the key features of Team Test is the ability to load test data from a database and then use that data within the test methods. After demonstrating basic unit testing, we describe how to create test data and incorporate it into a test.
8.1. Pre Requisites
8.2. Steps involved
1. Open Visual Studio IDEClick Start-->Programs-->Microsoft Visual Studio 2005-->Microsoft Visual Studio 2005
2. Creating a Sample VB ProjectFile-->New Project-->Chose from Templates as below
3. Add a simple "add" method to the code. This method accepts 2 arguments, adds them and returns the total.
4. Create Unit Test for "add" method by right clicking on the method
5. This will display a dialog box for generating unit tests into a different project (see Figure below). By default, the Output project setting is for a new Visual Basic project, but C# and C++ test projects are also available. For this article, we will select as Visual Basic project click the OK button.6.
7. Now enter a Test project name of VSTSVBDemo.Test for the project name.
8. The generated test project contains following files related to testing.
9. In addition to some default files; the generated test project contains references to both the Microsoft.VisualStudio.QualityTools.UnitTestFramework and the SampleVB project, which the unit tests are executing against. The former is the testing framework assembly the test engine depends on when executing the unit tests. The latter is a project reference to the target assembly we are testing.
There are two important attributes related to testing with Team Test. First, the designation of a method as a test is through the TestMethodAttribute attribute. In addition, the class containing the test method has a TestClassAttribute attribute.
Both of these attributes are found in the:
Microsoft.VisualStudio.QualityTools.UnitTesting.Framework namespace.Team Test uses reflection to search a test assembly and find all the TestClass decorated classes, and then find the corresponding TestMethodAttribute decorated methods to determine what to execute.One other important criterion, validated by the execution engine but not the compiler, is that the test method signature be an instance method that takes no parameters.The test method AddTest() instantiates the target SampleVB class before checking for assertion.When the test is run, the Assert.Inconclusive() provides an indication that it is likely to be missing the correct implementation.
10. Providing input values to test method
Updated addTest() Implementation after editing to provide input values
Note that the checks are done using the Assert.AreEqual() method.
11. Running Tests:To run all tests within the project simply requires running the test project. To enable this action, we need to:Right-click on the VSTSVBDemo.Test project within the solution explorer and click Set as Startup Project.Next, use the Debug->Start (F5) or Debug->Start Without Debugging (Ctrl+F5) menu items to begin running the tests.
To view additional details on the test, we can double click on it to open up the AddTest() Results:
Please note that the message in the assert statement does not appear for a "passing" test. It appears only when a test fails along with "Expected" and "Actual" values.
Lets assume that by mistake(even though its highly unlikely) we have put total=x*y instead of x+y in source code file. Let's try to run the same test and see the results:
Yes, as expected, test fails. Please read the results above for details.
12. Loading Test data from Database:
We are loading test data from SQL server database as below:
In Test View, right Click on AddTest()-->Properties-->Chose Data Connection String, Data Table Name and Data Access method as sequential or random.
This adds a line before test method as below:<DataSource("System.Data.SqlClient", "Data Source=SUPRABHAT;Initial Catalog=I3LEMPLOYEES;User ID=userid;Password=" ", "TestTable1", DataAccessMethod.Sequential)> <testMethod> Public sub Addtest()
The Test Data is loaded from a Data table called: "TestTable1"Here is the sample data in TestTable1:
Value1 Value2 1 10 22 25 589 236 56 202 45 56 879 56313. Associating data with AddTest()
Within test method, change x and y as below: Dim x As Integer = convert.ToInt32(testContextInstance.DataRow("Value1")Dim y As Integer = convert.ToInt32(testContextInstance.DataRow("Value2")
The important characteristic here is the TestContext.DataRow calls. TestContext is a property that the generator provided when we Created Tests.The test execution engine at runtime automatically assigns the property. so that within a test we can access data associated with the test context. Now Run the testmethod. Please note results are available for each data row as below:
14. Code Coverage
9. Testing Private Methods
To generate a unit test for a private method
If the signature for a private method changes, you must update the unit test that exercises that private method.
Regenerate Private Accessors
10. Manual Testing
A manual test is a description of test steps that a tester performs. You use manual tests in situations where other test types, such as unit tests or Web tests, would be too difficult or too time consuming to create and run. You would also use a manual test when the steps cannot be automated, for example to determine a component's behavior when network connectivity is lost; this test could be performed by manually unplugging the network cable.
10.1. Creating Manual Tests and mapping them to MSVSTS
10.2. To author a manual test
10.3. To execute a manual test
11. Limitations of unit testing with MSVSTS 2005
11.1. Database operations testing: two approaches
The first is to replace the database API that your DAL is talking to with an object of your own creation that mimics the database's interface, but does not actually perform any tasks. This method is often referred to as "mocking" or "stubbing.
The second testing method involves your DAL calling the real database behind the scenes. This usually means that you are performing state-based testing.
11.2. Using mock objects
11.3. Using state based testing approach
12. Best Practices
13. Conclusion
Overall, the VSTS unit testing functionality is comprehensive on its own. Furthermore, the inclusion of code coverage analysis is invaluable to evaluating how comprehensive the tests are. By using VSTS, you can correlate the number of tests compared to the number of bugs or the amount of code written. This provides an excellent indicator into the health of a project.
This paper provides not only an introduction to the basic unit testing functionality in the Team Test Suite, but also delves into some of the more advanced functionality relating to data driven testing. By beginning the practice of unit testing your code, you will establish a suite of tests that will prove invaluable throughout the life of a product. Team Test makes this easy with its strong integration into Visual Studio and the rest of the VSTS product line.
View All | http://www.c-sharpcorner.com/article/a-practical-approach-to-net-testing-using-visual-studio-200/ | CC-MAIN-2017-09 | refinedweb | 2,032 | 55.03 |
SD Dissatisfied with gcc... why?
Posted Nov 17, 2008 17:25 UTC (Mon) by pr1268 (subscriber, #24648)
[Link]
Just curious, what exactly is BSD's dissatisfaction with GCC? I'm not saying that the compiler world begins and ends with GCC, but it certainly commands higher respect and market share amongst the non-Microsoft and Sun/Solaris computing communities (but I've used GCC with success on these platforms as well).
Is this just more BSD v. GNU/Linux politics (SSH comes to mind here)?
Posted Nov 17, 2008 17:29 UTC (Mon) by rsidd (subscriber, #2582)
[Link]
Posted Nov 17, 2008 17:48 UTC (Mon) by pr1268 (subscriber, #24648)
[Link]
My thoughts on SSH (and my comment above) were influenced by some flame wars I read about (the Broadcom BSD driver row in particular). I probably should have specified OpenSSH. However, I really don't mean to dredge up reminders of how BSD v. GNU/Linux politics can get ugly.
Thanks for reminding me of the licensing incompatibility--that puts things in a better perspective.
Posted Nov 17, 2008 18:55 UTC (Mon) by mheily (subscriber, #27123)
[Link]
Posted Nov 17, 2008 22:14 UTC (Mon) by EmbeddedLinuxGuy (guest, #35019)
[Link]
Posted Nov 17, 2008 22:54 UTC (Mon) by nix (subscriber, #2304)
[Link]
I consider this utter lunacy, but if they want to waste their time to that
extent, well, it's their time to waste.
Posted Nov 17, 2008 23:55 UTC (Mon) by im14u2c (subscriber, #5246)
[Link]
LLVM's license reads an awful lot like the BSD-with-advertising-clause, just with UIUC's name subbed in.
I suspect it's not just license.
I seem to recall a big part of the attractiveness of PCC was that it was very simple and fast. Advanced transformations and optimizations have greater likelihood of breaking a complex program. Aggressive optimizations push the semantics of the C language quite a bit, such that it's hard to write something like a kernel. Lately, it seems like nearly every major GCC release seems to break something subtly somewhere that was relying on a stronger guarantee than is offered by the standard.
Posted Nov 18, 2008 0:33 UTC (Tue) by dlang (subscriber, #313)
[Link]
this makes a simple compiler very attractive (at least in theory)
Posted Nov 18, 2008 8:18 UTC (Tue) by dgm (subscriber, #49227)
[Link]
The real problem with GCC is that it's overly complex (and slow) for the quality of optimizations it provides.
Posted Nov 18, 2008 0:42 UTC (Tue) by qg6te2 (subscriber, #52587)
[Link]
"Stronger guarantee than offered by the standard"? I don't think such a thing exists. Not following standards is a very slippery path. If one writes code that does not follow the standard but rather some bastardised version of it, the resulting code is by default non-portable and likely to break across compilers (even on the same architecture). I've seen non-portable trickery where the entire point was to get a 5% speedup on a particular version of a compiler.
Posted Nov 18, 2008 1:49 UTC (Tue) by im14u2c (subscriber, #5246)
[Link]
Granted, since I follow Linux much, much more closely than BSD, I tend to hear it from the Linux kernel guys. Here's a thread that captures what I'm talking about.
Nice example...
Posted Nov 18, 2008 17:24 UTC (Tue) by daney (subscriber, #24551)
[Link]
There was some initial contention, but once the situation was well understood, the desired results were obtained. There are some who argue based on the axiom that GCC == BAD, but I don't think it holds in the case you mention.
Posted Nov 18, 2008 17:38 UTC (Tue) by im14u2c (subscriber, #5246)
[Link]
More aggressive optimizations will rely on this wiggle room and sometimes break things. That's a headache for kernel developers. Sure, GCC may get fixed, but breaking to begin with was an annoyance. If the break causes subtle problems, diagnosing the issue could be very difficult.
This is where a simpler compiler can be more effective. If it provides very simple semantics (rather than the extraordinary wiggle room the standard provides), it becomes easier to reason about the correctness of the program. Yes, it's less portable to other compilers, but as long as the compiler itself is portable, what's the issue?
After all, you don't see many Linux builds that don't use GCC (although there are a few...).
Posted Nov 18, 2008 19:07 UTC (Tue) by dlang (subscriber, #313)
[Link]
defining this area is one of the bigger changes in the new POSIX, C, and C++ standards that are nearing completion (POSIX is complete, C is expected next year, C++ sometime after that)
Posted Nov 18, 2008 10:42 UTC (Tue) by etienne_lorrain@yahoo.fr (subscriber, #38022)
[Link]
Even when the standards are very unclear, like volatile structure of bitfields considered by GCC (in C) as structure of volatile bitfields, resulting in reads of volatiles on an "C" whole structure write?
Moreover, I still do not understand how in C++ I am supposed to use volatile integers - when I want to write a non volatile integer to it or read it into a non volatile integer - i.e. the basic use of volatile (register mapped) integers. Obviously I do not want warnings or a cast of my volatile register mapped integer into non volatile...
Posted Nov 19, 2008 14:36 UTC (Wed) by dgm (subscriber, #49227)
[Link]
Wrong. I think you were thinking of the "register" keyword.
"Volatile" means that the contents of a variable can be changed externally at any time. Think, for instance, of a memory mapped hardware register. Basically it instructs the compiler to avoid certain optimizations based on knowledge of the previous value, and forces it to read it every time.
Posted Nov 19, 2008 20:57 UTC (Wed) by nix (subscriber, #2304)
[Link]
Posted Nov 20, 2008 10:34 UTC (Thu) by etienne_lorrain@yahoo.fr (subscriber, #38022)
[Link]
No, I was and am still talking of "volatile".
The problem is that the C++ standard treats all attributes the same way, and gives examples with "const".
I do understand that to overwrite the "const" attribute, I need to do a dirty cast to non-const, but the basic use of volatile is to copy them into standard variables, at a controled place of the software:
volatile unsigned ethernet_status;
/* -> address of ethernet_status defined in the linker file */
int fct(void) {
unsigned status = ethernet_status; // single read of "ethernet_status"
return ((status & 1) || ((status & 2) ^ (status & 4)));
}
Now tell me how to put ethernet_status and fct() into a class and compile without warning and without casting ethernet_status to non-volatile...
But the worst for me is still considering a volatile structure of bitfields as a structure of volatile bitfields, even if I see no problem to consider a constant structure of bitfields as a structure of constant bitfields...
For instance:
volatile struct color_s { char red, green, blue, transparent; } volcolor;
int fct (void) {
struct color_s color = volcolor;
return color.red == color.blue;
}
Because volcolor is considered as a structure of volatile fields, volcolor is read twice (two byte access) when optimising.
Posted Nov 18, 2008 0:33 UTC (Tue) by EmbeddedLinuxGuy (guest, #35019)
[Link]
Posted Nov 18, 2008 15:10 UTC (Tue) by nix (subscriber, #2304)
[Link]
Posted Nov 17, 2008 18:20 UTC (Mon) by endecotp (guest, #36428)
[Link]
follow the link the the last LWN story.
Posted Nov 17, 2008 22:29 UTC (Mon) by DonDiego (subscriber, #24141)
[Link]
pcc seeks contributions to reach 1.0 milestone
Posted Nov 17, 2008 20:03 UTC (Mon) by robert_s (guest, #42402)
[Link]
Posted Nov 17, 2008 22:12 UTC (Mon) by nix (subscriber, #2304)
[Link]
Posted Nov 19, 2008 20:46 UTC (Wed) by nix (subscriber, #2304)
[Link]
Posted Nov 19, 2008 11:10 UTC (Wed) by ragge (guest, #55254)
[Link]
Relying on Caldera?
Posted Nov 17, 2008 21:06 UTC (Mon) by dmarti (subscriber, #11625)
[Link]
Posted Nov 18, 2008 0:50 UTC (Tue) by JoeBuck (subscriber, #2330)
[Link]
Here's a copy of the announcement, which for some odd reason you'll no longer find on SCO's site.
Given subsequent events, perhaps SCO can argue that their predecessor company had no right to do this, because Novell, not they, own Unix. But to say so would undercut their own argument in their legal troubles with Novell.
pcc has been making good progress
Posted Nov 17, 2008 21:57 UTC (Mon) by roskegg (subscriber, #105)
[Link]
The pcc project is small enough that a regular guy can wrap his head around it, whereas gcc takes a lot of effort.
Ragge deserves our support, and we can only benefit from pcc become better. It is already usable.
Someone here said pcc suffers from ancient bad design decisions. What would those be? The codebase is almost entirely new, very little of the original code remains.
Posted Nov 18, 2008 0:53 UTC (Tue) by robert_s (guest, #42402)
[Link]
Exactly - so they're essentially scratchwriting a compiler, it's just everyone's afraid of saying that for some reason. Perhaps because it betrays the mammoth nature of the task.
Posted Nov 19, 2008 8:36 UTC (Wed) by ragge (guest, #55254)
[Link]
Posted Nov 18, 2008 3:55 UTC (Tue) by RandyBolton (subscriber, #6186)
[Link]
Bringing PCC into The 21th century
by Anders Magnusson
October 11, 2008
Good link
Posted Nov 18, 2008 7:59 UTC (Tue) by roskegg (subscriber, #105)
[Link]
I hope he finishes pdp10 support at some point, and that NetBSD and OpenBSD get ported to the PDP10. It would be a fun honeypot to leave running.
Posted Nov 18, 2008 16:03 UTC (Tue) by nix (subscriber, #2304)
[Link]
But if you want to get decent performance on modern hardware you *need*
things like speculative code motion (to avoid pipeline stalls), and if you
want that then you need dataflow analysis and I can see no way they can do
that with PCC without turning it into something quite different.
So perhaps they plan for this to stay forever in the ghetto of 1980s and
pre-1980s hardware? I'm not sure, but it hardly seems like an exciting
growth area to me.
Posted Nov 19, 2008 7:28 UTC (Wed) by ragge (guest, #55254)
[Link]
Currently the code generated is between 0-10% slower than code generated by gcc, and most of the time "lost" is due to not-yet added optimizations. Still, the compiler runs around 15 times faster than gcc.
Posted Nov 18, 2008 16:00 UTC (Tue) by nix (subscriber, #2304)
[Link]
Posted Nov 19, 2008 7:34 UTC (Wed) by ragge (guest, #55254)
[Link]
If you have a compiler with a reasonable internal design, none of the standard optimizing algorithms are especially difficult to add (like SSA conversion, graph-coloring register allocator, strength reduction etc.)
Posted Nov 19, 2008 8:36 UTC (Wed) by nix (subscriber, #2304)
[Link]
Posted Nov 20, 2008 0:16 UTC (Thu) by bboissin (subscriber, #29506)
[Link]
Posted Nov 20, 2008 7:51 UTC (Thu) by nix (subscriber, #2304)
[Link]
Posted Nov 20, 2008 16:49 UTC (Thu) by bboissin (subscriber, #29506)
[Link]
Coalescing is done during the Out-of-SSA pass (and it would be very inneficient to do the Out-of-SSA by replacing each phi with a move), if you're in CSSA form, just replace every phi-related variable with a unique variable and that's all (so the hard part is the conversion to CSSA).
As for the IR, I believe SSA can really influence your IR, so it's not just a property (you need parallel moves, etc). Libfirm does that very nicely from what I've seen.
Posted Nov 20, 2008 22:36 UTC (Thu) by nix (subscriber, #2304)
[Link]
Posted Dec 4, 2008 17:15 UTC (Thu) by salimma (subscriber, #34460)
[Link]
Posted Dec 5, 2008 0:44 UTC (Fri) by nix (subscriber, #2304)
[Link]
Linux is a registered trademark of Linus Torvalds | http://lwn.net/Articles/307437/ | crawl-002 | refinedweb | 2,014 | 63.53 |
A basic reporting chart in ASP.NET
It is time to learn to write some charts. By charts I mean graphic views for reporting on data.
Obtaining MSChart
For .NET 4, MSChart is included in the .NET Framework, so if you have installed .NET 4, you have already obtained MSChart.
For .NET 3.5, the MSChart project which was an add-on. If you are using .NET 3.5, you need to download and install the add-on.
Note: I am using .NET 4 and it was installed with Visual Studio 2010, so I have no need to install the add-on.
Also, we are going to the very minimal steps manually. Many of these steps may be done for you (for example, the Visual Studio Designer will populate the Web.Config for you, but it is always good to know how do things yourself.
Report Example Use Case
Imagine you have sales trending for four years, 2009-2012, and you want to visualize this trend. You want a chart that should all four years, with the quarter results next to each other.
Download the project here: SampleChart.zip
Step 1 – Create the Visual Studio project
- In Visual Studio, click on File | New | Project.
- Select Visual C# | Web from the Installed Templates.
- Locate and select ASP.NET Empty Web Application.
Note: I like to demonstrate using an Empty project you nothing is done for you, and you have to learn everything you actually need to do.
- Give the project a name.
- Click OK.
- Right-click on the newly created project and click Add | Reference.
- Select the .NET tab.
- Locate System.Web.DataVisualization and highlight it.
- Click OK.
Step 2 – Add a web form for your chart
- Right-click on the Project and choose Add | New Item.
- Select Web Form.
- Give the file a name.
I named my file Report.aspx.
- Click OK.
Step 3 – Create a data object for the report
- Right-click on the Project and choose Add | Class.
- Give the file a name.
I named my file Data.cs.
- Click OK.
Step 4 – Add example data to the data object for the report
While in a real world scenario, you would get the data from a database or somewhere, lets first just create some sample data.
- Create a few lists of numbers, one for each year as shown.
namespace CompareYearsByQuarter { public class Data { public int[] Sales2009 = new int[] { 47, 48, 49, 47 }; public int[] Sales2010 = new int[] { 47, 50, 51, 48 }; public int[] Sales2011 = new int[] { 50, 52, 53, 46 }; public int[] Sales2012 = new int[] { 53, 54, 55, 49 }; } }
That is it, your fake example data is prepared.
Step 5 – Add a Chart to the Report.aspx file
- Open the Report.aspx file.
- Add a Register to the System.Web.DataVisualization assembly.
- Locate the div inside the body.
- Inside the div, add a Chart that includes a ChartArea.
<%@ Page <html xmlns=""> <head runat="server"> <title></title> </head> <body> <form id="form1" runat="server"> <div> <asp:Chart <chartareas> <asp:ChartArea </asp:ChartArea> </chartareas> </asp:Chart> </div> </form> </body> </html>
Step 6 – Add code the Report.aspx.cs file
- Open the Report.aspx.cs file.
- Create an instance of the Data object that has our sample data.
- Add code in the Page_Load method to configure the Chart a separate series of data for each year.
using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.DataVisualization.Charting; namespace CompareYearsByQuarter { public partial class Report : System.Web.UI.Page { Data data = new Data(); protected void Page_Load(object sender, EventArgs e) { Series year2009 = new Series("Sales 2009"); // Populate new series with data foreach (var value in data.Sales2009) { year2009.Points.AddY(value); } SalesReport.Series.Add(year2009); Series year2010 = new Series("Sales 2010"); // Populate new series with data foreach (var value in data.Sales2010) { year2010.Points.AddY(value); } SalesReport.Series.Add(year2010); Series year2011 = new Series("Sales 2011"); // Populate new series with data foreach (var value in data.Sales2011) { year2011.Points.AddY(value); } SalesReport.Series.Add(year2011); Series year2012 = new Series("Sales 2012"); // Populate new series with data foreach (var value in data.Sales2012) { year2012.Points.AddY(value); } SalesReport.Series.Add(year2012); } } }
Step 7 – Add an http handler to the Web.Config for the Chart
- Open the Web.Config file.
- Add an http handler for the chart.
<?xml version="1.0"?> <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> <httpHandlers> <add path="ChartImg.axd" verb="GET,HEAD,POST" validate="false" type="System.Web.UI.DataVisualization.Charting.ChartHttpHandler, System.Web.DataVisualization, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" /> </httpHandlers> </system.web> </configuration>
You are done. Build and look at your report.
You now have a simple report that should show you the sales trend for quarters 1,2,3,4 over four years. Your chart should look like this.
Hi,
I need to create a organization chart using MS Charts and ASP.NET/ C#, Can you please help me to create an organization chart?
Thanks in advance.
Regards
Saravanan K | https://www.rhyous.com/2012/07/17/a-basic-reporting-chart-in-asp-net/ | CC-MAIN-2018-09 | refinedweb | 843 | 69.48 |
This chapter covers the drawing functions that are provided with FLTK.
There are only certain places you can execute drawing code in FLTK. Calling these functions at other places will result in undefined behavior!
To use the drawing functions you must first include the <FL/fl_draw.H> header file. FLTK provides the following types of drawing functions:
FLTK
results of calling
fl_frame() with a string that is
not a multiple of 4 characters in length are undefined.
The only difference between this function and
fl_frame2() is the order of the line segments.
See also: fl_frame boxtype.
The
fl_frame2() function.
Replace the top of the clip stack with a clipping region of any shape. Fl_Region is an operating system specific type. The second form returns the current clipping region.
FLTK manages colors as 32-bit unsigned integers. Values from 0 to 255 represent colors from the FLTK 1.0.x standard colormap and are allocated as needed on screens without TrueColor support. The Fl_Color enumeration type defines the standard colors and color cube for the first 256 colors. All of these are named with symbols in <FL/Enumerations.H>.
Color values greater than 255 are treated as 24-bit RGB values. These are mapped to the closest color supported by the screen, either from one of the 256 colors in the FLTK.
FLTK FLTK widgets. They draw on exact pixel boundaries and are as fast as possible. Their behavior is duplicated exactly on all platforms FLTK.clock w - 1 and h - 1.
Scroll a rectangle and draw the newly exposed portions. The contents of the rectangular area is first shifted by dx and dy pixels. The callback is then called for every newly exposed rectangular area,
The complex drawing functions let you draw arbitrary shapes with 2-D linear transformations. The functionality matches that found in the Adobe® PostScriptTM language. The exact pixels that are filled are less defined than for the fast drawing functions so that FLTK.
Transform a coordinate or a distance trough the current transformation matrix. After transforming a coordinate pair, it can be added to the vertex list without any forther translations using fl_transformed_vertex.
Start and end drawing a list of points. Points are added to the list with fl_vertex.,y and x3,y3.
Add a series of points to the current path on the arc of a circle; you can get elliptical paths by using scale and rotate before calling fl_arc(). x,y are the center of the circle, and r.
Fancy string drawing function which is used to draw all the labels. The string is formatted and aligned inside the passed box. Handles '\t' and '\n', expands all other control characters to ^X, and aligns inside or against the edges of the box described by x, y, w and h. See Fl_Widget::align() for values.
The text length is limited to 1024 caracters per line.
Measure how wide and tall the string will be when printed by the fl_draw(...align) function. If the incoming w is non-zero it will wrap to that width..
FLTK.
FLTK 1 supports western character sets using the eight bit encoding of the user-selected global code page. For MS Windows and X11, the code page is assumed to be Windows-1252/Latin1, a superset to ISO 8859-1. On Mac OS X, we assume MacRoman.
FLTK provides the functions fl_latin1_to_local, fl_local_to_latin1, fl_mac_roman_to_local, and fl_local_to_mac_roman to convert strings between both encodings. These functions are only required if your source code contains "C"-strings with international characters and if this source will be compiled on multiple platforms.
Assuming that the following source code was written on MS Windows, this example will output the correct label on OS X and X11 as well. Without the conversion call, the label on OS X would read Fahrvergn¸gen with a deformed umlaut u.
btn = new Fl_Button(10, 10, 300, 25); btn->copy_label(fl_latin1_to_local("Fahrvergnügen"));
If your application uses characters that are not part of both encodings, or it will be used in areas that commonly use different code pages, yoou might consider upgrading to FLTK 2 which supports UTF-8 encoding.
These functions allow you to draw interactive selection rectangles without using the overlay hardware. FLTK FLTK,Y are where to put the top-left corner. W and H define the size of the image. D is the delta to add to the pointer between pixels, it may be any value greater or equal to 3, or it can be negative to flip the image horizontally. LD is the delta to add to the pointer between lines (if 0 is passed it uses W * D), and may be larger than W * D to crop data, or negative to flip the image vertically.
It is highly recommended that you put the following code before the first show() of any window in your program to get rid of the dithering if possible:
Fl::visual(FL_RGB);.
Call the passed function to provide each scan line of the image. This lets you generate the image as it is being drawn, or do arbitrary decompression of stored data, provided it can be decompressed to individual scan lines easily.
The callback is called with the void * user data pointer which can be used to point XPM image data, with the top-left corner at the given position. The image is dithered on 8-bit displays so you won't lose color space for programs displaying both images and pixmaps. This function returns zero if there was any error decoding the XPM data.
To use an XPM, do:
#include "foo.xpm" ... fl_draw_pixmap(foo, X, Y);
Transparent colors are replaced by the optional Fl_Color argument. To draw with true transparency you must use the Fl_Pixmap class.
An XPM image contains the dimensions in its data. This function finds and returns the width and height. The return value is non-zero if the dimensions were parsed ok and zero if there was any problem.
FLTK provides a single function for reading from the current window or off-screen buffer into a RGB(A) image buffer.
Read a RGB(A) image from the current window or off-screen buffer. The p argument points to a buffer that can hold the image and must be at least W*H*3 bytes when reading RGB images.
FLTK.
fl_can_do_alpha_blending() will return 1, if your platform supports true alpha blending for RGBA images, or 0, if FLTK will use screen door transparency.
FLTK, FLTK,y,w,h indicates a destination rectangle. ox,oy,w,h is a source rectangle. This source rectangle is copied to the destination. The source rectangle may extend outside the image, i.e. ox and oy may be negative and w and h may be bigger than the image, and this area is left unchanged.
Draws the image with the upper-left corner at x,y. This is the same as doing draw(x,y,img->w(),img->h(),0,0).
Create an RGB offscreen buffer with w*h pixels.
Delete a previously created offscreen buffer. All drawings are lost.
Send all subsequent drawing commands to this offscreen buffer. FLTK can draw into a buffer at any time. There is no need to wait for an Fl_Widget::draw() to occur.
Quit sending drawing commands to this offscreen buffer.
Copy a rectangular area of the size w*h from srcx, srcy in the offscreen buffer into the current buffer at x, y. | http://fltk.org/doc-1.1/drawing.html | crawl-002 | refinedweb | 1,237 | 66.03 |
In the last two articles about CRUD and HTML (CODE Magazine, November/December 2015 and January/February 2016), you created a product information page to display a list of product data returned from a Web API. In addition, you built functionality to add, update, and delete products using the same Web API controller. In those two articles, there was a very basic exception handler function named handleException. This function displays error information returned from the API in an alert dialog on your page, as shown in Figure 1.
In this article, I’ll expand upon the IHttpActionResult interface you learned about in the last article (CODE Magazine, January/February 2016). You’ll learn to determine what kind of error is returned from your Web API call and display different messages based on those errors. You’ll learn to how to return 500, 404, and 400 exceptions and how to handle them on your Web page.
Basic Exception Handling in the Web API
If you remember from the last article, the Web API 2 provides a new interface called IHttpActionResult that you use as the return value for all your methods. This interface is built into the ApiController class (from which your ProductContoller class inherits) and defines helper methods to return the most common HTTP status codes such as a 202, 201, 400, 404, and 500. There are a few different methods built-in to the MVC Controller class that you assign to your IHttpActionResult return value. These methods and the HTTP status code they return are shown in the following list.
- OK: 200
- Created: 201
- BadRequest: 400
- NotFound: 404
- InternalServerError: 500
Each method in your controller should return one of the codes listed above. Each of these methods implements a class derived from the interface IHttpActionResult. When you return an OK from your method, a 200 HTTP status code is returned, which signifies that no error has occurred. The Created method, used in the Post method, returns a 201 which means that the method succeeded in creating a new resource. The other three return values all signify that an error has occurred, and thus call the function handleException specified in the error parameter of your Ajax. A typical Ajax call is shown below.
function productList() { $.ajax({ url: '/api/Product/', type: 'GET', dataType: 'json', success: function (products) { productListSuccess(products); }, error: function (request, message, error) { handleException(request, message, error); } }); }
The next snippet is the handleException function from the January/February article. This function extracts different properties from the request parameter and displays all of the values in an alert dialog. A newline-delimited string displays each value—Code, Text, and Message—on a separate line in the alert dialog. There’s no distinction in this function between 400, 404, or 500 errors.
function handleException(request, message, error) { var msg = ""; msg += "Code: " + request.status + "\n"; msg += "Text: " + request.statusText + "\n"; if (request.responseJSON != null) { msg += "Message: " + request.responseJSON.Message + "\n"; } alert(msg); }
BadRequest: 400 Error
A 400 error, BadRequest, is used when you have validation errors from data posted back by the user. You pass a ModelStateDictionary object to the BadRequest method and it converts that dictionary into JSON which, in turn, is passed back to your HTML page. Within the new handleException function, you’re going to write, parse the dictionary of validation errors, and display the messages to the user.
NotFound: 404 Error
A 404 error is generated by a Web server when a user requests a page, or other resource, that is not valid on that Web server. When writing your Web API, use a 404 to generate an error to inform the programmer calling your API that they have requested a piece of data that’s not available in your data store. For example, they may pass the following request to your Get method; api/Product/999. If there is no product with a primary key of 999 in your database, return a 404 error to the HTML page so the programmer can inform the user that they requested data that isn’t available.
InternalServerError: 500 Error
I’m sure that you’ve seen the dreaded 500 error come back from your Web application at some point or another while developing your application. A 500 error is returned when there’s an unhandled exception in your application. You should always handle all of your exceptions yourself, but you still might wish to pass back a 500 to the HTML page. If you do, I suggest that you supply as much information about why a 500 is being returned as you can. Don’t just let the .NET Framework dump their generic, and often unhelpful, error message back to the consumer of your Web API.
Unhandled Exceptions
To illustrate what happens when you have an unhandled exception in your Web API, open the ProductController and locate the Get method that returns a list of all products. Simulate an unhandled exception by adding the following lines of code at the top of this method.
[HttpGet()] public IHttpActionResult Get() { int x = 0; int y = 10; int z = y / x; // ... Rest of the code is here }
When you run the Web page and it tries to load all of the product data into the table on the Web page, a 500 error is sent back to the client. NOTE: You might need to turn off the "Break when an exception is User Unhandled" from the Debug->Exceptions menu prior to running this code.
Modify the handleException function (Listing 1) on your Web page to handle a 500 exception. Add a switch…case statement to the function. The first case you add is 500. You’ll add more cases later in this article. When a 500 exception is received, grab the ExceptionMessage property from the responseJSON property on the request parameter. The ExceptionMessage property contains the message from the .NET Exception object. For the case of the code you added to the controller, the message is "Attempted to divide by zero," as shown in Figure 1.
After running this code and seeing the error, be sure to remove the three lines that cause the error so you can move on and try out the rest of the error handling in this article.
Return InternalServerError for Handled Exceptions
Instead of letting the divide-by-error exception go unhandled, let’s explicitly return a 500 exception using the InternalServerError method. Open the ProductController.cs file and locate the Get(int id) method. Add the same three lines within a try…catch block, as shown in Listing 2, to simulate an error. Create two catch blocks: one to handle a DivideByZeroException and one to handle a generic Exception object. By catching an explicit DivideByZeroException object, you can pass back a more specific error message to the HTML page.
After writing this code, run the application and click on any of the Edit buttons in the HTML table. You should now see the custom message that you wrote appear in the alert dialog on the page. Once again, after you see the error appear and you’re satisfied that it’s working correctly, remove the three lines of code so you can move on with the rest of the article.
Handling Validation Errors with BadRequest
As you know, you can’t trust user data. Even if you have jQuery validation on your page, you can’t guarantee that the validation ran. Thus, you need to perform validation of your data once it’s posted back to your Web API. There are many ways to validate your data once it’s posted back. One of the most popular methods today is to use Data Annotations. If you’re using the Entity Framework (EF), data annotations are added to the entity class automatically. I’m not going to cover data annotations in this article, as I assume that you already know how to use those. If you don’t, there are many articles available on the usage of data annotations.
You may find cases where you just can’t accomplish the validation you need using data annotations. In those cases, perform data validation using traditional C# code. Create a ModelStateDictionary to hold your validation error messages, or add to the ModelStateDictionary returned from the EF engine. Open the ProductController.cs and add the following Using statement at the top of the file.
using System.Web.Http.ModelBinding;
You need to create an instance of a ModelStateDictionary class to hold your validation messages. Create a private field, named ValidationMessages, in the ProductController class.
private ModelStateDictionary ValidationMessages { get; set; }
Build a method, named Validate(), to check the product data passed into your Web API from an HTML page. Add a couple of checks for business rules, as shown in the code snippet below. Yes, these simple business rules can be done with data annotations, but I want to show you how to add custom rules.
private bool Validate(Product product) { ValidationMessages = new ModelStateDictionary(); if (string.IsNullOrWhiteSpace( product.ProductName)) { ValidationMessages.AddModelError( "ProductName", "Product Name must be filled in."); } if (string.IsNullOrWhiteSpace( product.Url)) { ValidationMessages.AddModelError("Url", "URL must be filled in."); } return (ValidationMessages.Count == 0); }
In the Add method you wrote in the last article, you added the new product data to the collection of products. Modify the Add method to call the Validate method you just wrote to see if all of the business rules passed prior to adding the product to the list (see Listing 3).
Now that you have the Add method calling the new Validate method, modify the Post method, as shown in Listing 4. If the Add method returns a false value, write code in the else statement to return a BadRequest with the collection of validation messages generated by the Validate method.
The ValidationsMessages property, which is a ModelStateDictionary object, gets serialized as JSON when it’s sent back to the front end. In your HTML page you’ll need to extract the error messages. The easiest way is to create a function, named getModelStateErrors, as shown in Listing 5.
This method calls JSON.parse to convert the JSON returned from the Web API into a JSON object that contains an array in a property named ModelState. You loop through the array and extract each error message from the object and add that message to a new array named errors. This array is returned from this function and will be used to display the errors on the Web page.
Open your Index.cshtml page and locate the handleException function. Add a new Case statement to handle these validation errors. What you add to the handleException function is shown in the next code snippet.
case 400: // 'Bad Request' means we are throwing back // model state errors var errors = []; errors = getModelStateErrors( request.responseText); for (var i = 0; i < errors.length; i++) { msg += errors[i] + "\n"; } break;
In this Case statement, call the getModelStateErrors function passing in the request.responseText that contains the JSON string returned from the Web API. Loop through the array of errors returned, and place a new line character at the end of each one, and concatenate to the variable msg. This msg variable is then displayed in the alert.
In this article, I’m appending error messages together to display in an alert dialog box. Feel free to modify this code to display the messages in any fashion you want. For instance, you might choose to create a bulleted list in a Bootstrap well to which you add the error messages. Or, you might display them in a Bootstrap pop-up dialog box. You’re only limited by your own imagination.
Handling Data Not Found
When working in a multi-user environment, it’s possible for one user to delete a record, but another wants to view or update that record after it’s already been deleted. In that case, you need to inform the second user that the record they were trying to locate could not be found. You do this using returning the 404 status code using the method NotFound, as shown in Listing 6. This is the same Get(int id) method shown earlier in this article, except that I’ve removed the catch block for the DivideByZeroException. If you search for the value contained in the ID parameter in the product list and you don’t find it, set the ret return variable to the object returned from the NotFound method.
Modify the handleException function in your Index.cshtml page to handle this new 404 value, as shown in the code snippet below. Because you can’t pass anything back from the NotFound method in your Web API, you need to add your own message on the page to specify exactly what could not be found. In this case, you inform the user that the product they requested could not be found.
case 404: // 'Not Found' means the data you are // requesting can not be found in // the database msg = "The Product you were requesting could not be found"; break;
Summary
In this article, you learned how to use the IHttpActionResult methods implemented in the .NET Controller class to return a specific HTTP status exception to the Web page that called the API. Using HTTP status codes allows you to add a case statement in your JavaScript to provide unique error messages to your user. I also recommend that, on the server-side, you log exceptions prior to returning an error to the front end. | https://www.codemag.com/Article/1603031/Handling-Exceptions-Returned-from-the-Web-API | CC-MAIN-2020-10 | refinedweb | 2,244 | 61.77 |
As of right about now, you should be able to mosey on over to the DxCore Community Plug-ins page, and grab a copy of CR_MoveFile. This is a plug-in I created primarily as a tool to aid in working in a TDD environment, but which certainly has uses for non-TDD applications. It does basically what
the name suggests, it allows you to move a file from one directory in your solution/project structure to another, even one in a different project. I implemented this as a code provider (since it could change the functionality if you move the file from one project to another), so it will appear in the Code menu when you have the cursor somewhere within the beginning blocks of a file (“using” sections, namespace declaration, or class/interface/struct declarations). Once selected you are presented with a popup window which has a tree that represents your current solution structure, with your current directory highlighted. You can use the arrow keys to navigate the directories and choose a new home for your file.
If you move files between projects, the plug-in will create project references for you, so you don’t need to worry about that. When the file is moved the file contents remain unchanged, so all namespaces will be the same as they were originally. I did this mostly to keep the plug-in simple, but also because I could see situations where this would be good, and situations where this would be bad, and it seemed like this was a bad choice to make for people. I’ve been using this plug-in on a day-to-day basis for a while now, and things seem pretty clean, I did run into a small issue, however, using it within a solution that was under source control. At this point you need to make sure the project files effected by the move are checked out, otherwise the plug-in goes through the motions, but doesn’t actually do anything, which is quite annoying. There is also no checking going on to make sure the language is the same between the source and target project, so if you work on a solution that contains C# and VB.Net projects, you have to be careful not to move files around to projects that can’t understand what they are (oh, and the project icons used on the tree view are all the same, so there is no visual indication of what project contains what type of files).
That’s pretty much it. Clean, simple, basic. Used with other existing CodeRush/Refactor tools like “Move Type To File” and “Move to Namespace”, this provides for some pretty powerful code re-organization. Just make sure you run all of your tests :). | http://drrandom.org/category/coderush/ | CC-MAIN-2018-51 | refinedweb | 466 | 61.8 |
Scratchbox 2 With Maemo
Please go to the official Maemo SDK+ project, they provide support for their stuff. Instructions here on this page are not supported, nobody will care if you have trouble with them.
Rough Guide to Combining Maemo and SB2
Get yourself a toolchain
Download CodeSourcery toolchain. Select "ARM GNU/Linux" for Target Platform, and "IA32 GNU/Linux" for Host Platform. Save the file to your $HOME directory.
Get a Qemu
First you need to get a decently functioning qemu for your system, it must have -drop-ld-preload supported. You will also need gnu make 3.81 or newer to build scratchbox2.
Tool Time
Second important thing is to install a debian/etch system somewhere on your host using debootstrap. It will be used as the build tools. We'll assume you've put it into /etch_root. You need to install into it all those tools that you need for building your software (typically "apt-get build-deps gedit" is a good set to start from).
If you don't feel like installing a second linux distro just to build some stuff for the internet tablets, it's perfectly possible to use your host system. Simply omit the "-t /etch_root" option from the sb2-init command. Hopefully you do understand that it'll make creating .debs a little hard unless you're running a debian or its derivative on your host.
SB2 Specifics
$ wget $ tar jxf sbox2-1.99.0.22.tar.bz2 $ cd sbox2-1.99.0.22 $ ./autogen.sh $ ./configure --prefix=$HOME/scratchbox2 $ make $ make install
If you're on amd64 version of your distribution, you need to build sb2 slightly differently in order to get both 32bit and 64bit versions of the libsb2.so. Make sure you have both versions of glibc and related development files on your host, then replace the last three lines of the above with:
$ make install-multilib prefix=$HOME/scratchbox2
Now continue setting up the toolchain and your target buildroot:
$ cd $HOME $ tar jxf arm-2007q3-51-arm-none-linux-gnueabi-i686-pc-linux-gnu.tar.bz2 $ cd arm-2007q3/bin $ ln -s arm-none-linux-gnueabi-gcc arm-none-linux-gnueabi-cc $ for f in *; do ln -s $f $(echo $f | sed 's/arm-none-linux-gnueabi-/arm-linux-/'); done $ mkdir $HOME/buildroot $ cd $HOME/buildroot $ wget $ tar zxf maemo-sdk-rootstrap_3.1_armel.tgz $ $HOME/scratchbox2/bin/sb2-init -m maemo -t /etch_root maemo $HOME/arm-2007q3/bin/arm-linux-gcc
Add $HOME/scratchbox2/bin to your PATH.
Functionality Verification
After completing the above steps you should verify with a simple hello-world style program that everything is working. Put this into hello.c:
#include <stdio.h> int main(int argc, char **argv) { printf("Hello, World!\n"); return 0; }
Compile it with:
$ sb2 gcc -o hello hello.c
Run it with:
$ sb2 ./hello
For a more thorough test do:
$ sb2 apt-get update $ sb2 apt-get source maemopad $ cd maemopad-1.5 $ sb2 dpkg-buildpackage -rfakeroot -d | http://freedesktop.org/wiki/Software/sbox2/Maemo/?action=subscribe | CC-MAIN-2014-41 | refinedweb | 499 | 57.67 |
New Version Available: "RDF 1.1 Concepts and Abstract Syntax" (Document Status Update, 25 February 2014)
The RDF Working Group has produced a W3C Recommendation for a new version of RDF which adds features to this 2004 version, while remaining compatible. Please see "RDF 1.1 Concepts and Abstract Syntax" for a new version of this document, and the "What's New in RDF 1.1" document for the differences between this version of RDF and RDF 1.1. framework for representing information in the Web.
RDF Concepts and Abstract Syntax defines an abstract syntax on which RDF is based, and which serves to link its concrete syntax to its formal semantics. It also includes discussion of design goals, key concepts, datatyping, character normalization and handling of URI references., and the abstract syntax is defined in section 6 of this document.
Section.
RDF uses the following key concepts:").
Each triple represents a statement of a relationship between the things denoted by the nodes that it links. Each triple has three parts:
The direction of the arc is significant: it always points toward the object.
The nodes of an RDF graph are its subjects and objects.
The assertion of an RDF triple says that some relationship, indicated by the predicate, holds between the things denoted by subject and object of the triple. The assertion of an RDF graph amounts to asserting all the triples in it, so the meaning of an RDF graph is the conjunction (logical AND) of the statements corresponding to all the triples it contains. A formal account of the meaning of RDF graphs is given in [RDF-SEMANTICS].
A node may be a URI with optional fragment identifier (URI reference, or URIref), a literal, or blank (having no separate form of identification). Properties a predicate identifies a relationship between the things represented by the nodes it connects. A predicate, but has no intrinsic name..
Datatypes are used by RDF in the representation of values such as integers, floating point numbers and dates.
A datatype consists of a lexical space, a value space and a lexical-to-value mapping, see section 5.
For example, the lexical-to-value 5.1)..).I references to identify resources and properties. Certain URI references.
Vocabulary terms in the rdf: namespace are listed in section 5.1 of the RDF syntax specification [RDF-SYNTAX]. Some of these terms are defined by the RDF specifications to denote specific concepts. Others have syntactic purpose (e.g. rdf:ID is part of the RDF/XML syntax)., if there is a bijection M between the sets of nodes of the two graphs, such that:
With this definition, M shows how each blank node in G can be replaced with a new blank node to give G'.
A URI reference within an RDF graph (an RDF URI reference) is a Unicode string [UNICODE] that:
The encoding consists of:.
Note: The restriction to absolute URI references is found in this abstract syntax. When there is a well-defined base URI, concrete syntaxes, such as RDF/XML, may permit relative URIs as a shorthand for such absolute URI references.
Note: Because of the risk of confusion between RDF URI references that would be equivalent if derefenced, the use of %-escaped characters in RDF URI references is strongly discouraged. See also the URI equivalence issue of the Technical Architecture Group [TAG]..
Note: ill-formed.
Note: In application contexts, comparing the values of typed literals (see section 6.5.2) is usually more helpful than comparing their syntactic forms (see section 6.5.1). Similarly, for comparing RDF Graphs, semantic notions of entailment (see [RDF-SEMANTICS]) are usually more helpful than syntactic equality (see section 6.3).), i).
application/rdf+xmlis archived at .
There were no substantive changes.
The following editorial changes have been made: | http://www.w3.org/TR/2004/REC-rdf-concepts-20040210/ | CC-MAIN-2017-13 | refinedweb | 633 | 56.45 |
By Neo4j Staff | August 4, 2011 week, so it’s available to all registered beta-testers of the Heroku PAAS platform. This is the first step in our efforts to provide hosted Neo4j Servers to a number of PAAS providers. And the Neo4j Add-On is currently available for free with the test plan, which we think is pretty cool.
What is a Graph Database ?
> heroku config.
The first link points to the Neo4j Web Administration UI which allows you to visualize and manage certain aspects of your graph database.
The second link (REST-URL) is used by your application to connect to the Neo4j Server.
(Please note that unlike the Neo4j Server available from this setup requires a username and password for basic authentication).
The Backup & Restore feature allows to you to pull the current content of your database anytime (suspending the server meanwhile) and replacing the available content with alternative data-sets.
For your convenience we began to provide datasets and will add more of them in the future so that you can start to explore larger graphs right away.
Please note that for saving capacity we suspend idle instances after 24 hours of inactivity. The fist request of a suspended instance will take longer than usual as the instance has to be resumed.
Your first ApplicationThe simplest setup for a application using the Neo4j-Server would be using a REST library like rest-client. Anyway the usage with rest-client would look like this.
irb> p JSON.parse(RestClient.get ENV['NEO4J_URL'])
Remember that instances of Neo4j are suspended after periods of inactivity, so the first call to an instance may take a little longer than normal as the instance is resumed.
A gem like neography encapsulates all the low-level details of the API and provides a clean object-oriented way of interacting with the Neo4j-Server.
# i_love_you.rb
require 'sinatra'
require 'neography'
neo = Neography::Rest.new(ENV['NEO4J_URL'] || "")
def create_graph(neo)
# procedural API
me = neo.get_root
pr = neo.get_node_properties(me)
return if pr && pr['name']
neo.set_node_properties(me,{"name" => "I"})
you = neo.create_node("name" => "you")
neo.create_relationship("love", me, you)
end
create_graph(neo)
get '/' do
## object-oriented API
me = Neography::Node.load(neo, 0)
rel = me.rels(:love).first
you = rel.end_node
"#{me.name} #{rel.rel_type} #{you.name}"
end
(J)Ruby Server Side ExtensionsThe Neo4j Heroku Add-On contains one more new piece of functionality that can be added to the Neo4j-Server: the ability to extend behavior of the server in other languages than Java.
Since we want to support efficient execution of code written in dynamic programming languages like Ruby, we’ve provided a means to move code to the server where it is executed performantly close to the underlying database. We employ JRuby on the server to run rack-applications packaged as ruby gems (for easier management versioning).
Those rack applications can use gems like neo4j.rb to access Neo4j directly without any intermediate or remote layers. This allows a more granular, batch oriented domain level REST API to your front-end providing (or consuming) all the information that has to be exchanged with the graph database in one
go.
A simple example of an application split into a persistence back-end and a front-end hosted on heroku is available on the Heroku documentation)
Hi. May I test this addon? I'm founder of | https://neo4j.com/blog/heroku-neo4j-add-on-available-in-private-beta/ | CC-MAIN-2016-36 | refinedweb | 563 | 56.25 |
Summary
I think I'm really starting to get this "event" thing down. I can tell because, at the end of the "Web Frameworks Jam," half the people were saying "when are we going to do this again?"
Admittedly it was a small group—eight plus me—but I had as much fun and learned as much as I think I possibly could have, so it didn't really matter. And the small group was actually so nice that I even thought of limiting the size in the future. The manageability alone was great: we could all go to lunch together, hikes were much easier to coordinate, and everyone fit easily into the house and kitchen during barbueques.
It's very probable that the next event, which I hope to hold before the end of the year, will be a TurboGears Jam, because a few of us learned enough about it to jumpstart everyone else in a group, and because it seems worthwhile to focus on one framework (plus this is the one that my team liked the most!).
Here are some blogs and photos about the event:
You can hear a podcast that we recorded at the end of the workshop . From this, you'll find out that we broke up into three groups. One worked with the Google Web Toolkit (with the help of Jim White, who had experience with it), one worked with Spring and Hibernate (although mostly Spring), and Barry Hawkins, Dianne Marsh and myself worked with TurboGears.
easy_install rocks, and that's what TurboGears uses. Python finally has a system to rival Perl's CPAN, and it's about time. Philip Eby really put a lot of thought into easy_install, and it should be part of the standard Python library.
Something I'd like to see is all the optional libraries installed as part of the TurboGears easy_install, because those libraries are often used in the tutorials. Tutorials should be created with the attitude that any little setback is going to cause some percentage of people to give up, or postpone it and maybe not come back.
I've been learning a bit more about Python deployment. For example, I started using a GoDaddy account for downloads, and then I began programming on it; it supports Python 2.4. Naturally, GoDaddy controls the Python installation so you cannot just install the necessary TurboGears packages in the site-packages directory. However, the only thing you must actually do is put those files someplace and then provide the proper path information so that Python can import the packages. This appears to be feasible on GoDaddy. I could of course install TurboGears on my own server, but I have a fascination with trying to see how far a cheap service like GoDaddy can be pushed.
The only web programming I've done is with CGI (C++, Python and a little PHP), Java (Servlets, JSPs and JDBC, but only in an educational context and never "in anger," and EJBs just from a theoretical and generally puzzled perspective), a bit of PHP, and a fair amount of Zope. So I, like many at the workshop (and I suspect many people in the business), still don't have that much of a sense of web frameworks in general. I think one would need to work with more of these before the general sense of them would begin to dawn.
I did find, however, that TurboGears gave me a nice perspective on the problem it is intended to solve. I think this is the first time that a web framework's functionality has been so obvious (admittedly, I may have been struggling with the "web problem" long enough that it's time for an epiphany).
I think that the TurboGears philosophy of cherry-picking and assembling the "best of breed" tools is a good one. Of course, what constitutes "best of breed" is determined by Kevin Dangoor and his team, but from what I've seen so far I'm pretty happy to let him do that research and make those decisions.
The experience I often have with libraries and frameworks consists of the "point of disconnect." That is, things make sense for awhile and then at some point an idea is introduced that feels slightly wonky. That's usually the point where the designer's model got stretched too thin, or their domain knowledge was lacking.
So far, everything about TurboGears has fit nicely into my head in a very straightforward fashion, elegantly and what we often refer to as "pythonic." In particular, this is to me the clearest example of where the division into model-view-controller makes sense. The tg-admin quickstart command even produces files called model.py and controllers.py. In fact, the only thing I would call a hiccup in all this is the naming of the @expose decorator. It took us a fair amount of time to figure out what it meant, whereas if it had been named @view instead, it would have been much more obvious. (I'm guessing that expose came from CherryPy, but there's no reason that the TurboGears decorator couldn't be renamed).
You can start every project using tg-admin quickstart. This provides enough of a framework to actually run, so you can get "hello, world" for you application right away, and then begin adding to it. There's even support for unit testing, which Barry looked at and found to be adequate (we often relied on him for opinions, because he's built enterprise systems using other frameworks, notably Java).
The "model" support provided by TurboGears is about easy connection to the database. Of course, the actual model for an application usually comprises more than this, notably business logic. However, in TurboGears that part is considered to vary too significantly between projects and so you write that code yourself (my impression is that EJB was about trying to reuse business objects, and that didn't seem to work out so well).
Once you run tg-admin quickstart, you go into the dev.cfg file and tell it how to connect to your database; this is a single-line URI similar to what you do for JDBC. The tutorial suggests sqlite, but we found MySQL to be more straightforward—although you do need to create the database and establish permissions, this is what you normally have to do with a database anyway.
Although I do not have broad experience with object-relational mapping (ORM) techniques, and Alex Martelli has said that every one that he's seen makes him ill, the one used by TurboGears, called SQLObjects, seems to me to be quite elegant and reasonable. It performs the most common operations without requiring you to write SQL, but if you need to do something special it's easy to drop into SQL. This seems like a good compromise to me.
You create a database model component by writing a class containing static fields and inheriting from SQLObject. Here's an example for a comment collector:
class Comment(SQLObject): name = UnicodeCol() email = UnicodeCol() body = UnicodeCol()
When you run tg-admin sql create, you'll automatically create a table in your database called comment, containing 4 fields: the three you see above, plus an autoincrement id field to uniquely identify each object. You can access id programmatically, and you can create "alternate ID" fields which can also be used to look up objects.
When you create a new Comment object, SQLObject will create a new row in your database. When you assign to one of the fields, SQLObject will update that row (you can turn on batch mode for more efficiency). You can also do things like joins and foreign keys when you define the fields in an object.
SQLObject is not limited to creating its own databases. You can work with an existing database by setting a metadata tag in the class.
It appears that, by the time version 1.0 of TurboGears comes out, they will be using SQLAlchemy because it is significantly faster. However, you will still be able to continue using the syntax of SQLObject, which will be automatically adapted to SQLAlchemy.
The structure of the controllers defined in the controllers.py file that tg-admin quickstart generates for you comes from the CherryPy server that TurboGears has selected for the controller system.
For each page in the system, you create controller code as a method in a class, then you indicate to CherryPy (via TurboGears) that you want this method to be called whenever the page is visited. For example, here is the controllers.py file that tg-admin quickstart creates for you:
import turbogears from turbogears import controllers class Root(controllers.Root): @turbogears.expose("test.templates.welcome") def index(self): import time return dict(now=time.ctime())
You can name your controller class anything you'd like, but it must inherit from controllers.Root. The @expose decorator (which, as I previously mentioned, I'd like to be @view instead) tells the method where it should hand off its data and control when it's finished; in this case it's the kid file test/templates/welcome.kid. The name of the method becomes the last part of the URL used to invoke that method.
If you are submitting a form to one of these methods, the form variables appear as named arguments. So in my comment-collector, if the form contains the fields name, email and body, the controller could look like this:
import turbogears from turbogears import controllers class CommentCollector(controllers.Root): @turbogears.expose("commentcollector.templates.store") def store(self, name, email, body): Comment(name=name, email=email, body=body) return dict()
The @expose decorator indicates that "store" is a legal page to call, and that the resulting page will be displayed using commentcollector/templates/store.kid.
When you create an SQLObject class, it automatically makes a constructor for you that allows you to pass in the fields for your class. By creating a Comment object, I automatically insert a new row into the database.
When you return a value from the method, you return it as a dictionary (a Python dict, aka map or associative array). Ordinarily the results will be displayed via a Kid template, in which case the dictionary is transparently available inside that template. You can also redirect to another page. It's even possible to display the results as JSON (a kind of second-generation version of AJAX) by simply annotating with @expose("json"). MochiKit is also integrated into TurboGears.
You can create or import multiple classes in your controllers.py file, which allows you to partition functionality into classes that might be reusable.
You display pages using the Kid templating language. This is a simple syntax added on top of HTML, so learning to use it is not onerous. As mentioned earlier, the dictionary that you return from your controller method is automatically passed in and available inside the Kid template. So if you wanted to create a page displaying the comment, after writing a controller method that returns a dictionary with the appropriate data, you could insert that data like this:
<h3 py:This is replaced.</h3> <h3 py:This is replaced.</h3> <h3 py:This is replaced.</h3>
Notice how the Kid syntax is relatively succinct, and makes use of the existing html. You can even view a Kid template directly with your browser, which is often quite helpful.
Kid has the usual set of constructs for doing things like looping to display a list, but these are very Python-like so they tend to be easy to read and remember. For example, a for loop:
<ul> <li py: I like ${fruit}s </li> </ul>
Notice how it uses the closing html tag to delimit the loop, and compare this with other systems; JSPs for example.
Kid doesn't limit you—you can embed python scripts within your Kid templates if you really want, although you then run the risk of mixing your MVC components together.
One thing I like a lot about Kid and the way TurboGears uses it is the "inheritance" model, so that you can establish the basic look and feel of all your pages in master.kid and then each new page inherits that look and feel (unless you tell it otherwise). This is almost always what you want to do on a site, and I think they've solved the problem nicely.
During the workshop, we initially had the ambition that we were going to jump in and start building something new right away, but then we got reasonable and decided to just work through the existing tutorial first (the Spring/Hibernate group also decided this). This was the "20-minute Wiki," which ended up taking us about 2 days to get through (admittedly interrupted by hikes and dinners and the like).
It took so long because we kept hitting snags, which we attributed primarily to the fact that TurboGears is pre-1.0 and moving quite quickly—there were three version changes during the workshop, which fixed a number of our issues (thus, the list below may indicate more errors than there actually are). We could certainly understand how it would be difficult to be maintaining the tutorial at the same time, but it made for a hard out-of-the-box experience. In fact, we all agreed that if we hadn't had the other people in our team to keep us going, we would have given up. Despite that, our general experience with TurboGears was that everything was very polished, so we expect that by the time version 1.0 comes out the tutorial should be polished as well.
Dianne created this list with the idea that, if you want to go through the tutorial before 1.0 comes out, it may help you get over some of the trouble spots that we ran into.
When you finish the "20-minute Wiki" tutorial, here's another, more sophisticated tutorial by Brian Beck.
import logging import cherrypy import turbogears from turbogears import controllers, expose, redirect from wiki20 import json from wiki20.model import page from docutils.core import publish_parts from turbogears.toolbox.catwalk import CatWalk; import model log = logging.getLogger("wiki20.controllers") class Root(controllers.RootController): catwalk = CatWalk(model) @expose(template="wiki20.templates.page") def index(self, pagename="FrontPage"): p=page.byPagename(pagename) content=publish_parts(p.data,writer_name="html")["html_body"] return dict(page=p, data=content )
class Root(controllers.RootController): catwalk = CatWalk(model) @expose(template="wiki20.templates.page") def index(self, pagename="FrontPage"): page=page.byPagename(pagename) content=publish_parts(page.data, writer_name="html")["html_body"] return dict(page=page, data=content )
TypeError: save() got an unexpected keyword argument 'submit'Bug report review of turbogears revealed the magic fix:
def save(self, pagename, data, **kwds) or def save(self, pagename, data, submit)
Page handler: <bound method Root.save of <wiki20.controllers.Root object at 0x015D9A50>> save 252, in _execute_func assert isinstance(output, basestring) or isinstance(output, dict) \ AssertionError: Method Root.save() returned unexpected output. Output should be of type basestring, dict or generator.Changing to: raise turbogears.redirect(“/%s”%pagename) works fine.
The server encountered an unexpected condition which prevented it from fulfilling the request. @expose("wiki20.templates.pagelist") @expose("json") def pagelist(self): pages = [page.pagename for page in Page.select(orderBy=Page.q.pagename)] return dict(pages=pages) Page handler: <bound method Root.pagelist of <wiki20.controllers.Root object at 0x015F3D90>> pagelist File "<string>", line 3, in pagelist 251, in _execute_func output = errorhandling.try_call(func, *args, **kw) File "c:\python24\lib\site-packages\TurboGears-0.9a5-py2.4.egg\turbogears\errorhandling.py", line 71, in try_call return func(self, *args, **kw) TypeError: pagelist() takes exactly 1 argument (2 given)Fix follows, but then get JSON format WHENEVER pagelist is viewed (even without tg_format=json parameter). Tutorial pretty much broken from this point on. Verified by running a version that Bruce had working (same result).
@expose("wiki20.templates.pagelist") @expose("json") def pagelist(self, *args, **kwds): pages = [page.pagename for page in Page.select(orderBy=Page.q.pagename)] return dict(pages=pages)
tg.mochikit_all = Truewith
tg.include_widgets = ['turbogears.mochikit']
Have an opinion? Readers have already posted 7 comments about this weblog entry. Why not add yours?
If you'd like to be notified whenever Bruce Eckel adds a new entry to his weblog, subscribe to his RSS feed. | http://www.artima.com/weblogs/viewpost.jsp?thread=170038 | CC-MAIN-2016-40 | refinedweb | 2,739 | 53.41 |
In ghci object code loader, linking against the previous temp dylib is not enough on OS X
joelteon encountered this issue in a rather complicated program, and we worked out the cause over IRC.
Suppose that ghci needs to sequentially load three modules A, B and C, where B refers to symbols from A and C refers to symbols from both A and B. (For example modules B, C and another module D all contain Template Haskell that refers to the previous module(s).) The object code linker currently works like this:
- Link the module
A.ointo a dylib
ghc_1.dylib
- Link the module
B.oagainst
ghc_1.dylibinto a new dylib
ghc_2.dylib
- Link the module
C.oagainst
ghc_2.dylibinto a new dylib
ghc_3.dylib
As a result,
ghc_2.dylib ends up with a NEEDED (or whatever it's called in Mach-O) entry for
ghc_1.dylib, and
ghc_3.dylib ends up with a NEEDED entry for
ghc_2.dylib.
However, this apparently does not satisfy the OS X
dlopen implementation, which complains about a missing symbol
_A_foo referenced by
ghc_3.dylib and which is defined in
ghc_1.dylib. Apparently the dynamic loader only checks the direct dependencies when trying to resolve undefined symbols.
(The Linux dynamic loader seems to be perfectly happy to use an indirect dependency to resolve an undefined symbol. But I found out that the linker gold has the same sort of behavior as the OS X dynamic loader. I don't know whether there is any standard here, but it seems that we cannot rely on the Linux dynamic loader's behavior.)
Presumably the fix is to keep track of all the previous temporary dylibs (rather than just one in
last_temp_so) and link against all of them when building a new temporary dylib. I'm slightly worried about quadratic behavior here, but I think it's unlikely to be an issue in practice.
I have a reproducer at (which I'll add to the test suite next) and oherrala ran the test on an OS X system with the following results:
=====> Last(ghci) 1167 of 4480 [0, 0, 0] cd ./ghci/scripts && HC="/Users/oherrala/gits/ghc/inplace/bin/ghc-stage2" HC_OPTS="-dcore-lint -dcmm-lint -dno-debug-output -no-user-package-db -rtsopts -fno-warn-tabs -fno-ghci-history " "/Users/oherrala/gits/ghc/inplace/bin/ghc-stage2" -dcore-lint -dcmm-lint -dno-debug-output -no-user-package-db -rtsopts -fno-warn-tabs -fno-ghci-history --interactive -v0 -ignore-dot-ghci +RTS -I0.1 -RTS <Last.script > Last.run.stdout 2> Last.run.stderr Actual stderr output differs from expected: --- /dev/null 2015-04-18 17:23:26.000000000 +0300 +++ ./ghci/scripts/Last.run.stderr 2015-04-18 17:23:34.000000000 +0300 @@ -0,0 +1,9 @@ +ghc-stage2: panic! (the 'impossible' happened) + (GHC version 7.11.20150418 for x86_64-apple-darwin): + Loading temp shared object failed: dlopen(/var/folders/64/90jfy8lj65bcm1k02syxz_l80000gn/T/ghc18812_0/libghc18812_12.dylib, 5): Symbol not found: _LastA_a_closure + Referenced from: /var/folders/64/90jfy8lj65bcm1k02syxz_l80000gn/T/ghc18812_0/libghc18812_12.dylib + Expected in: flat namespace + in /var/folders/64/90jfy8lj65bcm1k02syxz_l80000gn/T/ghc18812_0/libghc18812_12.dylib + +Please report this as a GHC bug: + Actual stdout output differs from expected: --- ./ghci/scripts/Last.stdout 2015-04-18 16:26:55.000000000 +0300 +++ ./ghci/scripts/Last.run.stdout 2015-04-18 17:23:34.000000000 +0300 @@ -1,3 +1,2 @@ 3 4 -7 *** unexpected failure for Last(ghci) | https://gitlab.haskell.org/ghc/ghc/-/issues/10322 | CC-MAIN-2021-17 | refinedweb | 568 | 50.73 |
Wrapper that simplifies reading global configuration
This is a wrapper module that simplifies reading global configuration. Go ahead and jump to examples
bower install pwf-config
Config has very few methods.
When module is initialized, it reads variable
sys from global scope and uses it as config data. If it finds
sys to be undefined, it waits for pwf-jquery-compat module to be initialized and then tries the same again.
Use
new_cfg as configuration data
pwf.use({'models':{'url':{'browse':'/api/{model}/browse'}}});
Lookup this
path in config. Return
def if not found.
console.log(pwf.config.get('models.url.browse'));...'/api/{model}/browse'
You need to pass JSON encoded data to your page and ensure it is defined before pwf-config initializes. Like this:
This config is later accessible with
pwf.config.get('some_config.value');. If you don't want to use this method, see #use | https://www.npmjs.com/package/pwf-config | CC-MAIN-2015-27 | refinedweb | 148 | 50.63 |
CodernityDB - a Fast Pure Python NoSQL Database
Jedrzej Nowak, Codernity,
CodernityDB is an open source, pure Python without 3rd party dependency, fast, multi platform, schema-less, NoSQL database.
You can also call it a more advanced key-value database, with multiple key-values indexes (not an index that you probably know from SQL databases) in the same engine (for sure it's not "simple key/value store"). What do we mean by advanced key-value database? Imagine several "standard" key-value databases inside one database. You can also call it a database with one primary index and several secondary indexes. Having this layout and having programmable database behavior gives quite a lot of possibilities. Some of them will be described in this article. At first we will focus on very fast overview of CodernityDB architecture. A more detailed description will come in further sections.
Web Site:
Version described: 0.4.2
System requirements: Python 2.6-2.7
License & Pricing: Apache 2.0
Support: directly via db [at] codernity.com and
General information
To install CodernityDB just run:
pip install CodernityDB
And that's all.
CodernityDB is fully written in Python 2.x. No compilation needed at all. It's tested and compatible with:
- CPython 2.6, 2.7
- PyPy 1.6+ (and probably older)
- Jython 2.7a2+
It's mainly tested on Linux environments, but it will work everywhere where Python runs fine. You can find more details about test process in how it's tested in documentation.
CodernityDB is one of projects developed and released by Codernity, so you can contact us directly in any case via db [at] codernity.com (please consider checking FAQ section first)
Do you want to contribute? Great! Then just fork our repository () on Bitbucket and do a pull request. It can’t be easier! CodernityDB and all related projects are released under Apache 2.0 license. To fill a bug, please also use Bitbucket.
CodernityDB index
What is that mysterious CodernityDB index?
At first, you have to know that there is one main index called id. CodernityDB object is kind off "smart wrapper" around different indexes. CodernityDB cannot work without id index. It's the only requirement. If it's hard to you to understand it, you can treat that mysterious id index as key-value database (well, in fact it is a key/value store). Each index will index data by key and it will associate the value for it. When you insert data into database, you always in fact insert it into the main index called id. Then the database passes your data to all other indexes. You can't insert directly into the index. The data inside the secondary indexes is associated with primary one. So there is no need to duplicate stored data inside the index (with_doc=True when querying index). The exception from that rule is when you really care about performance, having data also in secondary indexes doesn't require a background query to id index to get data from it. That is also the reason why you need to have an index in your database from the beginning, otherwise you will need to re-index your new or changed index, when you have records already in the database.
CodernityDB index is nothing less or more than a python class that is added to database. You can compare it to a read-only table that you may know from the SQL databases, or View from CouchDB. Here is an example of very simple index that will index data by x value:
class XHashIndex(HashIndex): def __init__(self, *args, **kwargs): kwargs['key_format'] = 'I' super(XHashIndex, self).__init__(*args, **kwargs) def make_key_value(self, data): x = data.get['x'] return x, None def make_key(self, key): return key
As you can see it is very easy python class. Nothing non standard for Pythonista, right ? Having such index in the database allows you to make queries about data in the database that has the "x" value. The important parts are for sure make_key_value make_key and key_format. make_key_value function can return:
- value, data: data has to be a dictionary, record will be indexed
- value, None: no data associated with that record in this index (except main data from id index)
- None: record will be not indexed by this index
You can find a detailed description in the documentation. And don't worry, you will not need to add indexes every time you want to use the database. CodernityDB saves them on disk (in _indexes directory), and loads them when you open database.
Please look closer at the class definition: you will find there that our index is a subclass of HashIndex. In CodernityDB we have currently implemented:
* Hash Index - Hash map implementation
Pros
- Fast
- "Simple"
Cons
- Records are not in the order of insert / update / delete but in random order
- Can be queried only for given key, or iterate over all keys
* B+Tree Index (called also Tree index, because it's shorter) - B+Tree structure implementation
Pros
- Can be queried for range queries
- Records are in order (depending on your keys)
Cons
- Slower than Hash based indexes
- More "complicated" than Hash one
You should spend a while to decide which index is correct for your use case (or use cases). Currently you can define up to 255 indexes. Please keep in mind that having more indexes slows down the insert / update / delete operations, but it doesn't affect the get operations at all, because get is made directly from index.
You can perform following basic operations on indexes: * get - single get * get_many - get more records with the same key, key range * all - to get all records in given index. Because it is quite common to associate more than one record in secondary index with one record in primary we implemented something called: Multikey index.
Writing a whole python class just to change some parts of the method, like indexing y instead of x, would be not very user friendly. Thus we created something that we call IndexCreator. It is some kind of meta-language that allows you to create your simple indexes faster and much easier. Here is an example of exactly the same XHashIndex:
name = MyTestIndex type = HashIndex key_format = I make_key_value: x, None
When you add that index to CodernityDB, the custom logic behind IndexCreator creates the python code from it. We created helpers for common things that you might need in your index. (IndexCreator docs), so once you get used to it, it is pretty straightforward.
As you maybe already noticed, you can split data to separate "tables/collections" with CodernityDB index (Tables, Collections in docs). All that you need is to have separate index for every table / collection that you want to have.
You should avoid operations directly on indexes, you should always run index methods / operations via Database object like:
db.get('x', 15) db.run('x', 'some_funct', *args, **kwargs)
Usage
You should have now a basic knowledge about CodernityDB internals and you may wonder how easy is to use CodernityDB. Here is an example::
#!/usr/bin/env python from CodernityDB.database import Database def main(): db = Database('/tmp/tut1') db.create() for x in xrange(100): print db.insert(dict(x=x)) for curr in db.all('id'): curr[x] += 1 db.update(curr) if __name__ == '__main__': main()
This is the fully working example. Adding this index from previous section will allow us to do for example:
print db.get('x', 15)
For detailed usage please refer to the quick tutorial on our web site.
Index functions
You can easily add index side functions to a database. While adding them to an embedded database might make no sense for you, adding them to the server version is very recommended. Those functions have direct access to the database and index objects. You can for example define a function for the x index :
def run_avg(self, db_obj, start, end): l = [] gen = db_obj.get_many( 'x', start=start, end=end, limit=-1, with_doc=True) for curr in gen: l.append(curr['doc']['t']) return sum(l) / len(l)
Then when you will execute it with:
db.run('x', 'avg', 0, 10)
You get the answer directly from the database. Please keep in mind, that changing the code of these functions doesn't require re-indexing your database.
Server version
CodernityDB is an embedded database engine by default, but we also created a server version. We created also the CodernityDB-PyClient that allows you to use the server version without any code changes except:
from CodernityDBPyClient import client client.setup()
Those 2 lines of will patch further CodernityDB imports to use the server version instead. You can migrate from the embedded version to the server version in seconds (+ time needed to download requirements from pypi). The Gevent library is strongly recommended for the server version
Future of CodernityDB
We're currently working on or have already released (it depends when you will read this article) the following features:
- TCP server that comes with TCP client, exactly on the same way as HTTP one
- Permanent changes index, used for "simple" replication for example
- Message Queue system (single guaranteed delivery with confirmation)
- Change subscription / notification (subscribe to selected database events)
If these features are not yet released and you want to have them before the public release, send us a mail and tell what are you interested in.
For advanced users
ACID
CodernityDB never overwrites existing data. The id index is always consistent. Other indexes can be always restored, refreshed (CodernityDB.database.Database.reindex_index() operation) from it.
In given time, just one writer is allowed to write into a. The database doesn’t allow multiple object operations and has no support for typical transaction mechanism (like SQL databases have). But single object operation is fully atomic. To handle multiple updates to the same document we use the _rev (like CouchDB) field, which informs us about the document version. When rev is not matched with one from the database, the write operation is refused. There is also nothing like delayed write in the default CodernityDB implementation. After each write, internals and file buffers are flushed and then the write confirmation is returned to the user.
CodernityDB does not sync kernel buffers with disk itself, it relies on system/kernel to do so. To be sure that data is written to the disk please call fsync(), or use CodernityDB.patch.patch_flush_fsync() to call fsync always when the flush is called after data modification.
Sharding in indexes
If you expect that one of your database indexes (id is the most common one there) might be bigger than 4GB, you should shard it. It means that instead of having one big index, CodernityDB splits the index into parts, still leaving API as it would be single index. It gives you about 25% performance boost for free.
Custom storage
If you need, you can define your custom storage. The default storage uses marshal to pack and unpack objects. It should be the best for most use cases, but you have to remember that you can serialize with it only basic types. Implementing a custom storage is very easy. For example you can implement a storage that uses Pickle (or cPickle) instead of marshal, then you will be able to store your custom classes and make some fancy object store. Anything is possible in fact. If you prefer you can implement remote storage. The sky is the limit there. The other reason might be implementing a secure storage. You can define a storage like this and you will get an encrypted transparent storage mechanism. No one without access to the key will be able to decrypt it.
More Database Knowledge
Database Tutorials and Videos
This article was originally published in the Fall 2013 issue of Methods & Tools | http://www.methodsandtools.com/tools/codernitydb.php | CC-MAIN-2015-22 | refinedweb | 1,969 | 64.1 |
Odd but understandable request (it's much easier to read a file if the ns's
are human-readable).
If you're using wsdl2java, you probably already have a wsdl, so you can use
that! You can simply go into your META-INF/services.xml and set
useOriginalwsdl to true (note the odd capitlization). Make sure that
whatever wsdl you are using for deployment is named the same as the .aar
file (or maybe that should be - make sure your .aar file has the same name
as the wsdl). Also make sure to put the WSDL and any related XSDs in the
META-INF folder.
Charles Koppelman
DrFirst.com
On Fri, Aug 28, 2009 at 2:20 PM, Hegerich, Robert L, JR (Bob) <
hegerich@alcatel-lucent.com> wrote:
> Is there a way to invoke wsdl2java so that instead of the default namespace
> prefixes (ns1, ns2, etc.) you can specify the namespace prefix for a
> namespace? We're replacing a gSOAP server with Apache Axis (2.1.3) and the
> client insists that the namespace prefixes be kept the same for some unknown
> reason.
>
> I see the Qname() constructor does allow for prefix selection but nothing
> in command line seems to invoke it. Perhaps something in -E?
>
> K.R.
> Bob H.= | http://mail-archives.apache.org/mod_mbox/axis-java-user/200908.mbox/%3C79202d6f0908281200w24fdce8odcc69311c932150b@mail.gmail.com%3E | CC-MAIN-2017-39 | refinedweb | 210 | 76.11 |
allocate space for a huge array
#include <malloc.h> void __huge *halloc( long int numb, size_t size );
The halloc() function allocates space for an array of numb objects of size bytes each, and initializes each object to 0. When the size of the array is greater than 64K bytes, then the size of an array element must be a power of 2, since an object could straddle a segment boundary.
A far pointer (of type void __huge *) to the start of the allocated memory. NULL is returned if there is insufficient memory available. NULL is also returned if the size of the array is greater than 64K bytes, and the size of an array element is not a power of 2.
#include <stdio.h> #include <malloc.h> void main() { long int __huge *big_buffer; big_buffer = (long int __huge *) halloc( 1024L, sizeof(long) ); if( big_buffer == NULL ) { printf( "Unable to allocate memory\n" ); } else { /* ... */ hfree( big_buffer ); /* deallocate */ } }
WATCOM
calloc(), _expand(), free(), hfree(), malloc(), _msize(), realloc(), sbrk() | https://users.pja.edu.pl/~jms/qnx/help/watcom/clibref/src/halloc.html | CC-MAIN-2022-33 | refinedweb | 163 | 70.53 |
In this post I am showing how to implement Column Grouping in SIlverlight 3 DataGrid,
Before starting make sure you have SIlverlight3_Tools installed in your system, not SIlverlight 3 Beta, as there are lost of changes in Grouping of Columns from SIlverlight 3 Beta to Silverlight 3 RTW.
You can follow the few simple steps below to get the Grouping for your Silverlight 3 DataGrid.
Else you can also download the code demonstrated from here.
1. Create an Silverlight Application using you Visual Studio 2008 IDE, and add a hosting web application in the project.
2. Once you have done with this, you will get an two projects in your Solution, you need to code only in your silverlight project.
3. Now add a C# class file called Person.cs in your Silverlight Project. This class is used to provide some sample data to Silverlight DataGrid, you can change this to your other datasources like SQL, XML, etc.
I have added the following lines of code in Person.cs
class Person
2: {
3: public string FirstName { get; set; }
4: public string LastName { get; set; }
5: public string City { get; set; }
6: public string Country { get; set; }
7: public int Age { get; set; }
8:
9: public List<Person> GetPersons()
10: {
11: List<Person> persons = new List<Person>
12: {
13: new Person
14: {
15: Age=32,
16: City="Bangalore",
17: Country="India",
18: FirstName="Brij",
19: LastName="Mohan"
20: },
21: new Person
22: {
23: Age=32,
24: City="Bangalore",
25: Country="India",
26: FirstName="Arun",
27: LastName="Dayal"
28: },
29: new Person
30: {
31: Age=38,
32: City="Bangalore",
33: Country="India",
34: FirstName="Dave",
35: LastName="Marchant"
36: },
37: new Person
38: {
39: Age=38,
40: City="Northampton",
41: Country="United Kingdom",
42: FirstName="Henryk",
43: LastName="S"
44: },
45: new Person
46: {
47: Age=40,
48: City="Northampton",
49: Country="United Kingdom",
50: FirstName="Alton",
51: LastName="B"
52: },
53: new Person
54: {
55: Age=28,
56: City="Birmingham",
57: Country="United Kingdom",
58: FirstName="Anup",
59: LastName="J"
60: },
61: new Person
62: {
63: Age=27,
64: City="Jamshedpur",
65: Country="India",
66: FirstName="Sunita",
67: LastName="Mohan"
68: },
69: new Person
70: {
71: Age=2,
72: City="Bangalore",
73: Country="India",
74: FirstName="Shristi",
75: LastName="Dayal"
76: }
77: };
78:
79: return persons;
80: }
81: }
4. Now since my data is ready, I will add the DataGrid control in my XAML page, to make the presentation more attractive, I have added few more lines of code.
5. I have also added a ComboBox Control to Select the Grouping Columns name.
My XAML code will look something like this below.
1: <UserControl
2: xmlns:data="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Data"
3: x:Class="Silverlight3DataGrid.MainPage"
4: xmlns=""
5: xmlns:x=""
6: xmlns:d=""
7: xmlns:mc=""
8: mc:
9: <Grid x:
10: <Grid.RowDefinitions>
11: <RowDefinition Height="0.154*"/>
12: <RowDefinition Height="0.483*"/>
13: <RowDefinition Height="0.362*"/>
14: </Grid.RowDefinitions>
15:
16: <StackPanel Orientation="Horizontal">
17: <TextBlock Text="Select Sort Criteria"
18:
19:
20: <TextBlock Text=" " />
21:
22: <ComboBox Grid.Row="0"
23: HorizontalAlignment="Left"
24: Width="200"
25: Height="30" x:Name="SortCombo"
26:
27:
28: <ComboBoxItem Content="Country" ></ComboBoxItem>
29: <ComboBoxItem Content="City" ></ComboBoxItem>
30: <ComboBoxItem Content="Age" ></ComboBoxItem>
31: </ComboBox>
32: </StackPanel>
33:
34: <data:DataGrid x:</data:DataGrid>
35:
36: </Grid>
37: </UserControl>
6. Now as my Data and Presentation is ready, its time for me to write some lines of code to Retrieve my sample data, group them into columns and then Bind it to the Grid.
Please find below the rest of the Code which demonstrates how I have Grouped the Columns using PagedCollectionView and PropertyGroupDescription.
1: public partial class MainPage : UserControl
3: PagedCollectionView collection;
4:
5: public MainPage()
6: {
7: InitializeComponent();
8: BindGrid();
9: }
10:
11: private void BindGrid()
12: {
13: Person person = new Person();
14: PersonGrid.ItemsSource = null;
15: List<Person> persons = person.GetPersons();
16: collection = new PagedCollectionView(persons);
17: collection.GroupDescriptions.Add(new
18: PropertyGroupDescription("Country"));
20: PersonGrid.ItemsSource = collection;
21: }
22:
23: private void SortCombo_SelectionChanged(object sender,
24: SelectionChangedEventArgs e)
25: {
26: ComboBoxItem person = SortCombo.SelectedItem as ComboBoxItem;
27: collection.GroupDescriptions.Clear();
28: collection.GroupDescriptions.Add(new
29: PropertyGroupDescription(person.Content.ToString()));
30:
31: PersonGrid.ItemsSource = null;
32: PersonGrid.ItemsSource = collection;
33: }
34: }
Yes its that simple, and its done. I know I have not given much description here, because nothing much to explain here. In the code above I am loading the Grid with my sample data coming from my Person class and default grouping the Person with Country.
Later on SortCombo_SelectionChanged, I am dynamically fetching the Selected column names from the ComboBox and Sorting on that.
Once you will run this application the screen will look something like this as given below.
Grouped by Country
Grouped by City
Grouped by Age
You can group according to your requirement, like Grouping Active and Deleted items, etc.
You can also download the sample code from here.
Cheers
~Brij
Pingback from Silverlight 3 DataGrid Columns Grouping using PagedCollectionView @ Web Design
Link to us
All material is copyrighted by its respective authors. Site design and layout
is copyrighted by DotNetSlackers.
Advertising Software by Ban Man Pro | http://dotnetslackers.com/Community/blogs/bmdayal/archive/2009/08/01/silverlight-3-datagrid-columns-grouping.aspx | CC-MAIN-2013-20 | refinedweb | 872 | 53.81 |
getsubopt - parse suboption arguments from a string
#include <stdlib.h> int getsubopt(char **optionp, char * const *tokens, char **valuep);
The getsubopt() function parses suboption arguments in a flag argument that was initially parsed by getopt(). These suboption arguments must be separated by commas and may consist of either a single token, or a token-value pair separated by an equal sign. Because commas delimit suboption arguments in the option string, they are not allowed to be part of the suboption arguments or the value of a suboption argument. Similarly, because the equal sign separates a token from its value, a token must not contain an equal sign.
The getsubopt() function takes the address of a pointer to the option argument string, a vector of possible tokens, and the address of a value string pointer. If the option argument string at *optionp contains only one suboption argument, getsubopt() updates *optionp to point to the null at the end of the string. Otherwise, it isolates the suboption argument by replacing the comma separator with a null, and updates *optionp to point to the start of the next suboption argument. If the suboption argument has an associated value, getsubopt() updates *valuep to point to the value's first character. Otherwise it sets *valuep to a null pointer.
The token vector is organised as a series of pointers to strings. The end of the token vector is identified by a null pointer.
When getsubopt() returns, if *valuep is not a null pointer then the suboption argument processed included a value. The calling program may use this information to determine if the presence or lack of a value for this suboption is an error.
Additionally, when getsubopt() fails to match the suboption argument with the tokens in the tokens array, the calling program should decide if this is an error, or if the unrecognised option should be passed on to another program.
The getsubopt() function returns the index of the matched token string, or -1 if no token strings were matched.
No errors are defined.
None.
None.
None.
getopt(), <stdlib.h>. | http://pubs.opengroup.org/onlinepubs/7990989775/xsh/getsubopt.html | CC-MAIN-2016-40 | refinedweb | 346 | 51.07 |
Update (2017/06/12): Added BenchmarkDotNet blog post link.
There are many exciting aspects to .NET Core (open source, cross platform, x-copy deployable, etc.) that have been covered in posts on this blog before. To me, though, one of the most exciting aspects of .NET Core is performance. There’s been a lot of discussion about the significant advancements that have been made in ASP.NET Core performance, its status as a top contender on various TechEmpower benchmarks, and the continual advancements being made in pushing it further. However, there’s been much less discussion about some equally exciting improvements throughout the runtime and the base class libraries.
There are way too many improvements to mention. After all, as an open source project that’s very accepting of contributions, Microsoft and community developers from around the world have found places where performance is important to them and submitted pull requests to improve things. I’d like to thank all the community developers for their .NET Core contributions, some of which are specifically called out in this post. We expect that many of these improvements will be brought to the .NET Framework over the next few releases, too. For this post, I’ll provide a tour through just a small smattering of the performance improvements you’ll find in .NET Core, and in particular in .NET Core 2.0, focusing on a few examples from a variety of the core libraries.
NOTE: This blog post contains lots of example code and timings. As with any such timings, take them with a grain of salt: these were taken on one machine in one configuration (all 64-bit processes), and so you may see different results on different systems. However, I ran each test on .NET Framework 4.7 and .NET Core 2.0 on the same machine in the same configuration at approximately the same time, providing a consistent environment for each comparison. Further, normally such testing is best done with a tool like BenchmarkDotNet; I’ve not done so for this post simply to make it easy for you to copy-and-paste the samples out into a console app and try them.
Editor: See the excellent follow-up by Andrey Akinshin where he Measures Performance Improvements in .NET Core with BenchmarkDotNet.
Collections
Collections are the bedrock of any application, and there are a multitude of collections available in the .NET libraries. Not every operation on every collection has been made faster, but many have. Some of these improvements are due to eliminating overheads, such as streamlining operations to enable better inlining, reducing instruction count, and so on. For example, consider this small example with a Queue<T>:
PR dotnet/corefx #2515 from OmariO removed from Enqueue and Dequeue a relatively expensive modulus operation that dominated the costs of these operations. On my machine, this code on .NET 4.7 produces output like this:
00:00:00.9392595
00:00:00.9390453
00:00:00.9455784
00:00:00.9508294
00:00:01.0107745
whereas with .NET Core 2.0 it produces output like this:
00:00:00.5514887
00:00:00.5662477
00:00:00.5627481
00:00:00.5685286
00:00:00.5262378
As this is “wall clock” time elapsed, smaller values are better, and this shows an ~2x increase in throughput!
In other cases, operations have been made faster by changing the algorithmic complexity of an operation. It’s often best when writing software to first write a simple implementation, one that’s easily maintained and easily proven correct. However, such implementations often don’t exhibit the best possible performance, and it’s not until a specific scenario comes along that drives a need to improve performance does that happen. For example, SortedSet<T>‘s ctor was originally written in a relatively simple way that didn’t scale well due to (I assume accidentally) employing an O(N^2) algorithm for handling duplicates. The algorithm was fixed in .NET Core in PR dotnet/corefx #1955. The following short program exemplifies the difference the fix made:
On my system, on .NET Framework this code takes ~7.7 seconds to execute. On .NET Core 2.0, that is reduced to ~0.013s, for an ~600x improvement (at least with 400K elements… as the fix changed the algorithmic complexity, the larger the set, the more the times will diverge).
Or consider this example on SortedSet<T>:
The implementation of Min and Max in .NET 4.7 walks the whole tree underlying the SortedSet<T>, but that’s unnecessary for finding just the min or the max, as the implementation can traverse down to just the relevant node. PR dotnet/corefx #11968 fixes the .NET Core implementation to do just that. On .NET 4.7, this example produces results like:
00:00:01.1427246
00:00:01.1295220
00:00:01.1350696
00:00:01.1502784
00:00:01.1677880
whereas on .NET Core 2.0, we get results like:
00:00:00.0861391
00:00:00.0861183
00:00:00.0866616
00:00:00.0848434
00:00:00.0860198
showing a sizeable decrease in time and increase in throughput.
Even a core workhorse like List<T> has found room for improvement. Due to JIT improvements and PRs like dotnet/coreclr #9539 from benaadams, core operations like List<T>.Add have gotten faster. Consider this small example:
On .NET 4.7, I get results like:
00:00:00.4434135
00:00:00.4394329
00:00:00.4496867
00:00:00.4496383
00:00:00.4515505
and with .NET Core 2.0, I see:
00:00:00.3213094
00:00:00.3211772
00:00:00.3179631
00:00:00.3198449
00:00:00.3164009
To be sure, the fact that we can do 100 million such adds and removes from a list like this in just 0.3 seconds highlights that the operation wasn’t slow to begin with. But over the execution of an app, lists are often added to a lot, and the savings add up.
These kinds of collections improvements expand beyond just the System.Collections.Generic namespace; System.Collections.Concurrent has had many improvements as well. In fact, both ConcurrentQueue<T> and ConcurrentBag<T> were essentially completely rewritten for .NET Core 2.0, in PRs dotnet/corefx #14254 and dotnet/corefx #14126, respectively. Let’s look at a basic example, using ConcurrentQueue<T> but without any concurrency, essentially the same example as earlier with Queue<T> but with ConcurrentQueue<T> instead:
On my machine on .NET 4.7, this yields output like the following:
00:00:02.6485174
00:00:02.6144919
00:00:02.6699958
00:00:02.6441047
00:00:02.6255135
Obviously the ConcurrentQueue<T> example on .NET 4.7 is slower than the Queue<T> version on .NET 4.7, as ConcurrentQueue<T> needs to employ synchronization to ensure it can be used safely concurrently. But the more interesting comparison is what happens when we run the same code on .NET Core 2.0:
00:00:01.7700190
00:00:01.8324078
00:00:01.7552966
00:00:01.7518632
00:00:01.7560811
This shows that the throughput using ConcurrentQueue<T> without any concurrency improves when switching to .NET Core 2.0 by ~30%. But there are even more interesting aspects. The changes in the implementation improved serialized throughput, but even more so reduced the synchronization between producers and consumers using the queue, which can have a more demonstrable impact on throughput. Consider the following code instead:
This example is spawing a consumer that sits in a tight loop dequeueing any elements it can find, until it consumes everything the producer adds. On .NET 4.7, this outputs results on my machine like the following:
00:00:06.1366044
00:00:05.7169339
00:00:06.3870274
00:00:05.5487718
00:00:06.6069291
whereas with .NET Core 2.0, I see results like the following:
00:00:01.2052460
00:00:01.5269184
00:00:01.4638793
00:00:01.4963922
00:00:01.4927520
That’s an ~3.5x throughput increase. But better CPU efficiency isn’t the only impact of the rewrite; memory allocation is also substantially decreased. Consider a small variation to the original test, this time looking at the number of GC collections instead of the wall-clock time:
On .NET 4.7, I get output like the following:
Gen0=162 Gen1=80 Gen2=0
Gen0=162 Gen1=81 Gen2=0
Gen0=162 Gen1=81 Gen2=0
Gen0=162 Gen1=81 Gen2=0
Gen0=162 Gen1=81 Gen2=0
whereas with .NET Core 2.0, I get output like the following:
Gen0=0 Gen1=0 Gen2=0
Gen0=0 Gen1=0 Gen2=0
Gen0=0 Gen1=0 Gen2=0
Gen0=0 Gen1=0 Gen2=0
Gen0=0 Gen1=0 Gen2=0
That’s not a typo: 0 collections. The implementation in .NET 4.7 employs a linked list of fixed-size arrays that are thrown away once the fixed number of elements are added to each; this helps to simplify the implementation, but results in lots of garbage being generated for the segments. In .NET Core 2.0, the new implementation still employs a linked list of segments, but these segments increase in size as new segments are added, and more importantly, utilize circular buffers, such that new segments only need be added if the previous segment is entirely full (though other operations on the collection, such as enumeration, can also cause the current segments to be frozen and force new segments to be created in the future). Such reductions in allocation can have a sizeable impact on the overall performance of an application.
Similar improvements surface with ConcurrentBag<T>. ConcurrentBag<T> maintains thread-local work-stealing queues, such that every thread that adds to the bag has its own queue. In .NET 4.7, these queues are implemented as linked lists of one node per element, which means that any addition to the bag incurs an allocation. In .NET Core 2.0, these queues are now arrays, which means that other than the amortized costs involved in growing the arrays, additions are allocation-free. This can be seen in the following repro:
On .NET 4.7, this yields the following output on my machine:
whereas with .NET Core 2.0 I get:
That’s an ~30% improvement in throughput, and a huge (complete) reduction in allocations and resulting garbage collections.
LINQ
In application code, collections often go hand-in-hand with Language Integrated Query (LINQ), which has seen even more improvements. Many of the operators in LINQ have been entirely rewritten for .NET Core in order to reduce the number and size of allocations, reduce algorithmic complexity, and generally eliminate unnecessary work.
For example, the Enumerable.Concat method is used to create a single IEnumerable<T> that first yields all of the elements of one enumerable and then all the elements of a second. Its implementation in .NET 4.7 is simple and easy to understand, reflecting exactly this statement of behavior:
This is about as good as you can expect when the two sequences are simple enumerables like those produced by an iterator in C#. But what if application code instead had code like the following?
Every time we yield out of an iterator, we return out of the enumerator’s MoveNext method. That means if you yield an element from enumerating another iterator, you’re returning out of two MoveNext methods, and moving to the next element requires calling back into both of those MoveNext methods. The more enumerators you need to call into, the longer the operation takes, especially since every one of those operations involves multiple interface calls (MoveNext and Current). That means that concatenating multiple enumerables grows exponentially rather than linearly with the number of enumerables involved. PR dotnet/corefx #6131 fixed that, and the difference is obvious in the following example, which concatenates 10K enumerables of 10 elements each:
On my machine on .NET 4.7, this takes ~4.12 seconds. On my machine on .NET Core 2.0, this takes only ~0.14 seconds, for an ~30x improvement.
Other operators have been improved substantially by eliminating overheads involved when various operators are used together. For example, a multitude of PRs from JonHanna have gone into optimizing various such cases and into making it easier to add more cases in the future. Consider this example:
Here we create an enumerable of the numbers 10,000,000 down to 0, and then time how long it takes to sort them ascending, skip the first 4 elements of the sorted result, and grab the fifth one (which will be 4, as the sequence starts at 0). On my machine on .NET 4.7, I get output like:
00:00:01.3879042
00:00:01.3438509
00:00:01.4141820
00:00:01.4248908
00:00:01.3548279
whereas with .NET Core 2.0, I get output like:
00:00:00.1776617
00:00:00.1787467
00:00:00.1754809
00:00:00.1765863
00:00:00.1735489
That’s a sizeable improvement (~8x), in this case due primarily (though not exclusively) to PR dotnet/corefx #2401, which avoids most of the costs of the sort.
Similarly, PR dotnet/corefx #3429 from justinvp added optimizations around the common ToList method, providing optimized paths for when the source had a known length, and plumbing that through operators like Select. The impact of this is evident in a simple test like the following:
On .NET 4.7, this provides results like:
00:00:00.1308687
00:00:00.1228546
00:00:00.1268445
00:00:00.1247647
00:00:00.1503511
whereas on .NET Core 2.0, I get results like the following:
00:00:00.0386857
00:00:00.0337234
00:00:00.0346344
00:00:00.0345419
00:00:00.0355355
showing an ~4x increase in throughput.
In other cases, the performance wins have come from streamlining the implementation to avoid overheads, such as reducing allocations, avoiding delegate allocations, avoiding interface calls, minimizing field reads and writes, avoiding copies, and so on. For example, jamesqo contributed PR dotnet/corefx #11208, which substantially reduced overheads involved in Enumerable.ToArray, in particular by better managing how the internal buffer(s) used grow to accommodate the unknown amount of data being aggregated. To see this, consider this simple example:
On .NET 4.7, I get results like:
Elapsed=00:00:01.0548794 Gen0=2 Gen1=2 Gen2=2
Elapsed=00:00:01.1147146 Gen0=2 Gen1=2 Gen2=2
Elapsed=00:00:01.0709146 Gen0=2 Gen1=2 Gen2=2
Elapsed=00:00:01.0706030 Gen0=2 Gen1=2 Gen2=2
Elapsed=00:00:01.0620943 Gen0=2 Gen1=2 Gen2=2
and on .NET Core 2.0, results like:
Elapsed=00:00:00.1716550 Gen0=1 Gen1=1 Gen2=1
Elapsed=00:00:00.1720829 Gen0=1 Gen1=1 Gen2=1
Elapsed=00:00:00.1717145 Gen0=1 Gen1=1 Gen2=1
Elapsed=00:00:00.1713335 Gen0=1 Gen1=1 Gen2=1
Elapsed=00:00:00.1705285 Gen0=1 Gen1=1 Gen2=1
so for this example ~6x faster with half the garbage collections.
There are over a hundred operators in LINQ, and while I’ve only mentioned a few, many of them have been subject to these kinds of improvements.
Compression
The examples shown thus far, of collections and LINQ, have been about manipulating data in memory. There are of course many other forms of data manipulation, including transformations that are heavily CPU-bound in nature. Investments have also been made in improving such operations.
One key example is compression, such as with DeflateStream, and several impactful performance changes have gone in here. For example, in .NET 4.7, zlib (a native compression library) is used for compressing data, but a relatively unoptimized managed implementation is used for decompressing data; PR dotnet/corefx #2906 added .NET Core support for using zlib for decompression as well. And PR dotnet/corefx #5674 from bjjones enabled using a more optimized version of zlib produced by Intel. These combine to a fairly dramatic effect. Consider this example, which just creates a large array of (fairly compressible) data:
On .NET 4.7, for this one compression/decompression operation I get results like:
00:00:00.7977190
whereas with .NET Core 2.0, I get results like:
00:00:00.1926701
Cryptography
Another common source of compute in a .NET application is the use of cryptographic operations. Improvements can be seen here as well. For example, in .NET 4.7, SHA256.Create returns a SHA256 type implemented in managed code, and while managed code can be made to run very fast, for very compute-bound computations it’s still hard to compete with the raw throughput and compiler optimizations available to code written in C/C++. In contrast, for .NET Core 2.0, SHA256.Create returns an implementation based on the underlying operating system, e.g. using CNG on Windows or OpenSSL on Unix. The impact can be seen in this simple example that hashes a 100MB byte array:
On .NET 4.7, I get:
00:00:00.7576808
whereas with .NET Core 2.0, I get:
00:00:00.4032290
Another nice improvement for zero code changes.
Math
Mathematical operations are also a large source of computation, especially when dealing with large numbers. Through PRs like dotnet/corefx #2182, axelheer made some substantial improvements to various operations on BigInteger. Consider the following example:
On my machine on .NET 4.7, this outputs results like:
00:00:05.6024158
The same code on .NET Core 2.0 instead outputs results like:
00:00:01.2707089
This is another great example of a developer caring a lot about a particular area of .NET and helping to make it better for their own needs and for everyone else that might be using it.
Even some math operations on core integral types have been improved. For example, consider:
PR dotnet/coreclr #8125 replaced DivRem with a faster implementation, such that on .NET 4.7 I get results like:
00:00:01.4143100
and on .NET Core 2.0 I get results like:
00:00:00.7469733
for an ~2x improvement in throughput.
Serialization
Binary serialization is another area of .NET that can be fairly CPU/data/memory intensive. BinaryFormatter is a component that was initially left out of .NET Core, but it reappears in .NET Core 2.0 in support of existing code that needs it (in general, other forms of serialization are recommended for new code). The component is almost an identical port of the code from .NET 4.7, with the exception of tactical fixes that have been made to it since, in particular around performance. For example, PR dotnet/corefx #17949 is a one-line fix that increases the maximum size that a particular array is allowed to grow to, but that one change can have a substantial impact on throughput, by allowing for an O(N) algorithm to operate for much longer than it previously would have before switching to an O(N^2) algorithm. This is evident in the following code example:
On .NET 4.7, this code outputs results like:
76.677144
whereas on .NET Core 2.0, it outputs results like:
6.4044694
showing an ~12x throughput improvement for this case. In other words, it’s able to deal with much larger serialized inputs more efficiently.
Text Processing
Another very common form of computation in .NET applications is the processing of text, and a large number of improvements have gone in here, at various levels of the stack.
Consider Regex. This type is commonly used to validate and parse data from input text. Here’s an example that uses Regex.IsMatch to repeatedly match phone numbers:
On my machine on .NET 4.7, I get results like:
Elapsed=00:00:05.4367262 Gen0=820 Gen1=0 Gen2=0
whereas with .NET Core 2.0, I get results like:
Elapsed=00:00:04.0231373 Gen0=248
That’s an ~25% improvement in throughput and an ~70% reduction in allocation / garbage collections, due to a small change in PR dotnet/corefx #231 that made a fix to how some data is cached.
Another example of text processing is in various forms of encoding and decoding, such as URL decoding via WebUtility.UrlDecode. It’s often the case in decoding methods like this one that the input doesn’t actually need any decoding, but the input is still passed through the decoder in case it does. Thanks to PR dotnet/corefx #7671 from hughbe, this case has been optimized. So, for example, with this program:
on .NET 4.7, I see the following output:
Elapsed=00:00:01.6742583 Gen0=648
whereas on .NET Core 2.0, I see this output:
Elapsed=00:00:01.2255288 Gen0=133
Other forms of encoding and decoding have also been improved. For example, dotnet/coreclr #10124 optimized the loops involved in using some of the built-in Encoding-derived types. So, for example, this code that repeatedly encodes an ASCII input string as UTF8:
on .NET 4.7 produces output for me like:
00:00:02.4028829
00:00:02.3743152
00:00:02.3401392
00:00:02.4024785
00:00:02.3550876
and on .NET Core 2.0 produces output for me like:
00:00:01.6133550
00:00:01.5915718
00:00:01.5759625
00:00:01.6070851
00:00:01.6070767
These kinds of improvements extend as well to general Parse and ToString methods in .NET for converting between strings and other representations. For example, it’s fairly common to use enums to represent various kinds of state, and to use Enum.Parse to parse a string into a corresponding Enum. PR dotnet/coreclr #2933 helped to improve this. Consider the following code:
On .NET 4.7, I get results like:
Elapsed=00:00:00.9529354 Gen0=293
Elapsed=00:00:00.9422960 Gen0=294
Elapsed=00:00:00.9419024 Gen0=294
Elapsed=00:00:00.9417014 Gen0=294
Elapsed=00:00:00.9514724 Gen0=293
and on .NET Core 2.0, I get results like:
Elapsed=00:00:00.6448327 Gen0=11
Elapsed=00:00:00.6438907 Gen0=11
Elapsed=00:00:00.6285656 Gen0=12
Elapsed=00:00:00.6286561 Gen0=11
Elapsed=00:00:00.6294286 Gen0=12
so not only an ~33% improvement in throughput, but also an ~25x reduction in allocations and associated garbage collections.
Or consider PRs dotnet/coreclr #7836 and dotnet/coreclr #7891, which improved DateTime.ToString with formats “o” (the round-trip date/time pattern) and “r” (the RFC1123 pattern), respectively. The result is that given code like this:
on .NET 4.7 I see output like:
Elapsed=00:00:03.7552059 Gen0=949
Elapsed=00:00:03.6992357 Gen0=950
Elapsed=00:00:03.5459498 Gen0=950
Elapsed=00:00:03.5565029 Gen0=950
Elapsed=00:00:03.5388134 Gen0=950
and on .NET Core 2.0 output like:
Elapsed=00:00:01.3588804 Gen0=87
Elapsed=00:00:01.3932658 Gen0=88
Elapsed=00:00:01.3607030 Gen0=88
Elapsed=00:00:01.3675958 Gen0=87
Elapsed=00:00:01.3546522 Gen0=88
That’s an almost 3x increase in throughput and a whopping ~90% reduction in allocations / garbage collections.
Of course, there’s lots of custom text processing done in .NET applications, beyond using built in types like Regex/Encoding and built-in operations like Parse and ToString, often built directly on top of string, and lots of improvements have gone into operations on String itself.
For example, String.IndexOf is very commonly used to find characters in strings. IndexOf was improved in dotnet/coreclr #5327 by bbowyersmyth, who’s submitted a bunch of performance improvements for String. So this example:
on .NET 4.7 produces results for me like this:
00:00:05.9718129
00:00:05.9199793
00:00:06.0203108
00:00:05.9458049
00:00:05.9622262
whereas on .NET Core 2.0 it produces results for me like this:
00:00:03.1283763
00:00:03.0925150
00:00:02.9778923
00:00:03.0782851
for an ~2x improvement in throughput.
Or consider comparing strings. Here’s an example that uses String.StartsWith and ordinal comparisons:
Thanks to dotnet/coreclr #2825, on .NET 4.7 I get results like:
00:00:01.3097317
00:00:01.3072381
00:00:01.3045015
00:00:01.3068244
00:00:01.3210207
and on .NET Core 2.0 results like:
00:00:00.6239002
00:00:00.6150021
00:00:00.6147173
00:00:00.6129136
00:00:00.6099822
It’s quite fun looking through all of the changes that have gone into String, seeing their impact, and thinking about the additional possibilities for more improvements.
File System
Thus far I’ve been focusing on various improvements around manipulating data in memory. But lots of the changes that have gone into .NET Core have been about I/O.
Let’s start with files. Here’s an example of asynchronously reading all of the data from one file and writing it to another (using FileStreams configured to use async I/O):
A bunch of PRs have gone into reducing the overheads involved in FileStream, such as dotnet/corefx #11569 which adds a specialized CopyToAsync implementation, and dotnet/corefx #2929 which improves how asynchronous writes are handled, and so when running this on .NET 4.7 I get results like:
Elapsed=00:00:09.4070345 Gen0=14 Gen1=7 Gen2=1
and on .NET Core 2.0, results like:
Elapsed=00:00:06.4286604 Gen0=4 Gen1=1 Gen2=1
Networking
Networking is a big area of focus now, and likely will be even more so moving forward. A good amount of effort is being applied to optimizing and tuning the lower-levels of the networking stack, so that higher-level components can be built efficiently.
One such change that has a big impact is PR dotnet/corefx #15141. SocketAsyncEventArgs is at the center of a bunch of asynchronous operations on Socket, and it supports a synchronous completion model whereby asynchronous operations that actually complete synchronously can avoid costs associated with asynchronous completions. However, the implementation in .NET 4.7 only ever synchronously completes operations that fail; the aforementioned PR fixed the implementation to allow for synchronous completions of all async operations on sockets. The impact of this is very obvious in code like the following:
This program creates two connected sockets, and then writes 1,000,000 times to one socket and receives on the other, in both cases using asynchronous methods but where the vast majority (if not all) of the operations will complete synchronously. On .NET 4.7 I see results like:
Elapsed=00:00:20.5272910 Gen0=42 Gen1=2 Gen2=0
whereas on .NET Core 2.0 with most of these operations able to complete synchronously, I see results instead like:
Elapsed=00:00:05.6197060 Gen0=0 Gen1=0 Gen2=0
Not only do such improvements accrue to components using sockets directly, but also to using sockets indirectly via higher-level components, and other PRs have resulted in additional performance increases in higher-level components, such as NetworkStream. For example, PR dotnet/corefx #16502 re-implemented Socket’s Task-based SendAsync and ReceiveAsync operations on top of SocketAsyncEventArgs and then allowed those to be used from NetworkStream.Read/WriteAsync, and PR dotnet/corefx #12664 added a specialized CopyToAsync override to support more efficiently reading the data from a NetworkStream and copying it out to some other stream. Those changes have a very measurable impact on NetworkStream throughput and allocations. Consider this example:
As with the previous Sockets one, we’re creating two connected sockets. We’re then wrapping those in NetworkStreams. On one of the streams we write 1K of data a million times, and on the other stream we read out all of its data via a CopyToAsync operation. On .NET 4.7, I get output like the following:
Elapsed=00:00:24.7827947 Gen0=220 Gen1=3 Gen2=0
whereas on .NET Core 2.0, the time is cut by 5x, and garbage collections are reduced effectively to zero:
Elapsed=00:00:04.9452962 Gen0=0 Gen1=0 Gen2=0
Further optimizations have gone into other networking-related components. For example, SslStream is often wrapped around a NetworkStream in order to add SSL to a connection. We can see the impact of these changes as well as others in an example like the following, which just adds usage of SslStream on top of the previous NetworkStream example:
On .NET 4.7, I get results like the following:
Elapsed=00:00:21.1171962 Gen0=470 Gen1=3 Gen2=1
.NET Core 2.0 includes changes from PRs like dotnet/corefx #12935 and dotnet/corefx #13274, both of which together significantly reduce the allocations involved in using SslStream. When running the same code on .NET Core 2.0, I get results like the following:
Elapsed=00:00:05.6456073 Gen0=74 Gen1=0 Gen2=0
That’s 85% of the garbage collections removed!
Concurrency
Not to be left out, lots of improvements have gone into infrastructure and primitives related to concurrency and parallelism.
One of the key focuses here has been the ThreadPool, which is at the heart of the execution of many .NET apps. For example, PR dotnet/coreclr #3157 reduced the sizes of some of the objects involved in QueueUserWorkItem, and PR dotnet/coreclr #9234 used the previously mentioned rewrite of ConcurrentQueue<T> to replace the global queue of the ThreadPool with one that involves less synchronization and less allocation. The net result in visible in an example like the following:
On .NET 4.7, I see results like the following:
Elapsed=00:00:03.6263995 Gen0=225 Gen1=51 Gen2=16
Elapsed=00:00:03.6304345 Gen0=231 Gen1=62 Gen2=17
Elapsed=00:00:03.6142323 Gen0=225 Gen1=53 Gen2=16
Elapsed=00:00:03.6565384 Gen0=232 Gen1=62 Gen2=16
Elapsed=00:00:03.5999892 Gen0=228 Gen1=62 Gen2=17
whereas on .NET Core 2.0, I see results like the following:
Elapsed=00:00:02.1797508 Gen0=153 Gen1=0 Gen2=0
Elapsed=00:00:02.1188833 Gen0=154 Gen1=0 Gen2=0
Elapsed=00:00:02.1000003 Gen0=153 Gen1=0 Gen2=0
Elapsed=00:00:02.1024852 Gen0=153 Gen1=0 Gen2=0
Elapsed=00:00:02.1044461 Gen0=154 Gen1=1 Gen2=0
That’s both a huge improvement in throughput and a huge reduction in garbage collections for such a core component.
Synchronization primitives have also gotten a boost in .NET Core. For example, SpinLock is often used by low-level concurrent code trying either to avoid allocating lock objects or minimize the time it takes to acquire a rarely contended lock, and its TryEnter method is often called with a value of 0 in order to only take the lock if it can be taken immediately, or else fail immediately if it can’t, without any spinning. PR dotnet/coreclr #6952 improved that fail fast path, as is evident in the following test:
On .NET 4.7, I get results like:
00:00:02.3276463
00:00:02.3174042
00:00:02.3022212
00:00:02.3015542
00:00:02.2974777
whereas on .NET Core 2.0, I get results like:
00:00:00.3915327
00:00:00.3953084
00:00:00.3875121
00:00:00.3980009
00:00:00.3886977
Such an ~6x difference in throughput can make a significant impact on hot paths that exercise such locks.
That’s just one example of many. Another is around Lazy<T>, which was rewritten in PR dotnet/coreclr #8963 by manofstick to improve the efficiency of accessing an already initialized Lazy<T> (while the performance of accessing a Lazy<T> for the first time matters, the expectation is that it’s accessed many times after that, and thus we want to minimize the cost of those subsequent accesses). The effect is visible in a small example like the following:
On .NET 4.7, I get results like:
00:00:02.6769712
00:00:02.6789140
00:00:02.6535493
00:00:02.6911146
00:00:02.7253927
whereas on .NET Core 2.0, I get results like:
00:00:00.5278348
00:00:00.5594950
00:00:00.5458245
00:00:00.5381743
00:00:00.5502970
for an ~5x increase in throughput.
What’s Next
As I noted earlier, these are just a few of the many performance-related improvements that have gone into .NET Core. Search for “perf” or “performance” in pull requests in the dotnet/corefx and dotnet/coreclr repos, and you’ll find close to a thousand merged PRs; some of them are big and impactful on their own, while others whittle away at costs across the libraries and runtime, changes that add up to applications running faster on .NET Core. Hopefully subsequent blog posts will highlight additional performance improvements, including those in the runtime, of which there have been many but which I haven’t covered here.
We’re far from done, though. Many of the performance-related changes up until this point have been mostly ad-hoc, opportunistic changes, or those driven by specific needs that resulted from profiling specific higher-level applications and scenarios. Many have also come from the community, with developers everywhere finding and fixing issues important to them. Moving forward, performance will be a bigger focus, both in terms of adding additional performance-focused APIs (you can see experimentation with such APIs in the dotnet/corefxlab repo) and in terms of improving the performance of the existing libraries.
To me, though, the most exciting part is this: you can help make all of this even better. Throughout this post I highlighted some of the many great contributions from the community, and I highly encourage everyone reading this to dig in to the .NET Core codebase, find bottlenecks impacting your own apps and libraries, and submit PRs to fix them. Rather than stumbling upon a performance issue and working around it in your app, fix it for you and everyone else to consume. We are all very excited to work with you on bringing such improvements into the code base, and we hope to see all of you involved in the various .NET Core repos.. | https://blogs.msdn.microsoft.com/dotnet/2017/06/07/performance-improvements-in-net-core/?replytocom=295115 | CC-MAIN-2017-43 | refinedweb | 5,758 | 67.65 |
32308/how-to-check-wether-a-bucket-exists-using-boto3
You can use this code to check whether the bucket is available or not
import boto3
s3 = boto3.resource('s3')
print(s3.Bucket('priyajdm') in s3.buckets.all())
This could be very expensive call depending on how many times the all() must ask AWS for next bucket.
Instead check creation_date: if it is None then it doesn't exist:
import boto3
s3 = boto3.resource('s3')
print(
"Bucket does not exist"
if s3.Bucket('priyajdm').creation_date is None
else
"Bucket exists")
You can delete the folder by using ...READ MORE
You can use the below command
$ aws ...READ MORE
There is a particular format that works ...READ MORE
Here is the code to attach a ...READ MORE
The error is basically saying that you ...READ MORE
There are three ways in which you ...READ MORE
Using Client versioning you can create folders ...READ MORE
You can use a for loop to ...READ MORE
You can use method of creating object ...READ MORE
You can delete the file from S3 ...READ MORE
OR | https://www.edureka.co/community/32308/how-to-check-wether-a-bucket-exists-using-boto3 | CC-MAIN-2019-39 | refinedweb | 181 | 78.35 |
Introduction
This series has concentrated on new features in PHP V5.3, such as namespaces, closures, object handling, object-oriented programming, and Phar. While these flashy new features are a welcome addition to the language, PHP V5.3 was also designed to further streamline PHP. It builds upon the popular and stable PHP V5.2 and enhances the language to make it more powerful. In this article, learn about changes and considerations when upgrading from PHP V5.2.
Syntax changes
Additions to the language, with namespaces and closures (discussed in Part 2 and Part 3), have
added more reserved words. Starting in PHP V5.3,
namespace can no longer be used as an identifier. The
closure class is now a reserved class, but it is still a valid identifier. Listing 1 shows examples of statements that no longer work in PHP V5.3 because of the additional reserved words.
Listing 1. Invalid PHP statements
// the function definition below will throw a fatal error in PHP 5.3, but is perfectly // valid in 5.2 function namespace() { .... } // same with this class definition class Closure { .... }
Support for the
goto statement was also added in PHP V5.3. Now
goto is a reserved word.
goto statements are not common in modern languages (you might remember using them in BASIC), but there are occasionally use cases where they are handy. Listing 2 has an example of how they work.
Listing 2.
goto statements in PHP
echo "This text will get outputted"; goto a; echo "This text will get skipped"; a: echo "This text will get outputted";
One possible use case for
gotos is for breaking out of deeply nested loops and
if statements. This will make the code much clearer to read.
Changes to functions and methods
Though there are no major changes to functions and methods in PHP V5.3, there are a few enhancements to help with outstanding issues in PHP and to improve performance. This section discusses a few of the more notable changes.
In previous versions of PHP, the array functions
atsort,
natcasesort,
usort,
uasort,
uksort,
array_flip, and
array_unique let you pass objects instead of arrays as parameters. The functions then treat the properties of the objects as the array keys and values. This is no longer available in PHP V5.3, so you need to cast the objects to arrays first. Listing 3 shows how to change your code.
Listing 3. Changing code to cast objects to arrays for certain functions
$obj = new stdClass; $obj->a = '1'; $obj->b = '2'; $obj->c = '3'; print_r(array_flip($obj)); // will NOT work in PHP 5.3, but will in PHP 5.2 print_r(array_flip((array) $obj)); // will work in PHP 5.3 and 5.2
The magic class methods are now much more strictly enforced. The following methods must have public visibility:
__get
__set
__isset
__unset
__call
You can use the new
__callStatic() magic method in cases
where
__call was used in a static context as a workaround
for this change. The required arguments for these methods are enforced and must be
present with the exception of the
__isString() magic method, which accepts no parameters. Listing 4 shows how to use these methods and the required parameters for them.
Listing 4. Using the magic methods
class Foo { public function __get($key) {} // must be public and have one parameter public function __set($key,$val) {} // must be public and have two parameters public function __toString() {} must be public and have no parameters }
Several functions that previously were not supported on PHP with Windows are now supported in PHP V5.3. For example, the
getopt() function is designed to parse the options for calling a PHP script from the command line.
inet_ntop() and
inet_pton(), the functions for encoding and decoding Internet addresses, now work under Windows® as well. There are several math functions, such as
asinh(),
acosh(),
atanh(),
log1p(), and
expm1(), which now have Windows support.
Extension changes
The PHP Extension C Library (PECL), has been the breeding ground for new extensions in PHP. Once an extension is mature and stable and is viewed as a useful function for part of the core distribution, it is often added during major version changes. In this spirit, starting in PHP V5.3, the following extensions are part of the core PHP distribution.
- FileInfo
- Provides functions that help detect the content type and encoding of a file by looking at certain magic byte character sequences in the file.
- intl
- A wrapper for the International Components for Unicode (ICU) library, providing functions for unicode and globalization support.
- Phar
- A PHP archiving tool discussed in Part 4.
- mysqlnd
- A native PHP driver for MySQL database access that's a replacement for the earlier MySQL and MySQLi extension which leveraged the libmysql library.
- SQLite3
- A library for using SQLite V3 databases.
When an extension is no longer actively maintained, or is deemed unworthy of distribution with the core PHP distribution, it is often moved to PECL. As part of the shuffling in PHP V5.3, the following extensions have been removed from the core PHP distribution and are maintained as part of PECL.
- ncurses
- An emulation of curses, which is used to display graphical output on the command line.
- fpdf
- Handles building and using forms and form data within PDF documents.
- dbase
- Provides support for reading and writing dbase compatible files.
- fbsql
- Supports database access for Frontbase database servers.
- ming
- An open source library that allows you to create Flash 4 animations.
The Sybase extension has been removed entirely and is superseded by the sybase_ct extension. The sybase_ct extension is fully compatible with the former and should be a drop-in replacement. The newer function will use the Sybase client libraries you need to install on your Web server.
Build changes
With the strong focus on refining the build process in PHP V5.3, it's easier to build PHP on all platforms. To maintain consistency between PHP builds and to provide a guaranteed set of components in PHP, the PCRE, Reflection, and SPL extensions can no longer be disabled in the build. You can build distributable PHP applications that use these extensions and are guaranteed that they will be available for use.
A new team took over the PHP Windows build in the last year. Starting in PHP V5.3, the team will provide several improvements for users on Windows. The new builds will target the 586 architecture (Intel® Pentium® or later) and will require Windows 2000/XP or later, removing support for Windows 98/NT and earlier. PHP builds built with Microsoft® Visual Studio® 2008 and builds targeting the x86-64 architecture will be built. They offer improved performance when working with FastCGI on the Microsoft IIS Web server or with Apache built with the same compiler and architecture. The Windows installer is also being improved to better configure PHP with the Microsoft IIS Web server. The team launched a Web site specific to PHP on Windows (see Resources).
.ini changes
An important feature of PHP is that its behavior can be configured using an .ini file. In PHP V5.3, several problematic directives for this file have been removed, such as the zend.ze1_compatibility_mode setting. You now have tremendously improved flexibility when using this file.
There are two major improvements to the php.ini file:
- You can have variables within the php.ini file. This is very handy for removing redundancies within the file, and it's easier to update the file if changes are needed. Listing 5 shows an example.
Listing 5. Variables in php.ini filefoo and newfoo have the same value.
foo = bar [section] newfoo = ${foo}
- You can make per-directory and per-site PHP ini settings, similar to making those same settings with the Apache configuration files. The advantage here is that the syntax becomes consistent across all of the various SAPIs PHP can run under. Listing 6 shows how this works.
Listing 6. Per-site and per-directory .ini settings
[PATH=/var/www/site1] ; directives here only apply to PHP files in the /var/www/site1 directory [HOST=] ; directives here only apply to PHP files requested from the site.
You can also have these .ini directives created in user-specified .ini files, located in the file system itself, in the same way that .htaccess files work under the Apache HTTP Web server. The default name for this file is specified by the user_ini.filename directive. The feature can be disabled by setting this directive to an empty value. Any per-site and per-directory directives cannot be overridden in a user-specified .ini file.
Deprecated items
PHP V5.3 starts officially deprecating older functions that will no longer be
available in future versions of PHP. When you use these functions, an
E_DEPRECATED error will be emitted. The following functions are deprecated for PHP V5.3:
- Ticks (
declare(ticks=N)and
register_tick_function()), which were designed to have a function call for every n statements executed by the parser within the
declare()block. They're being removed because of numerous breaks in their function and because the feature isn't used very often.
define_syslog_variables(), which initializes all syslog related variables. This function isn't required because the constants it defines are already defined globally. Simply removing this function call should be all that is necessary.
- The
eregregular-expression functions. It's recommended that you use the PCRE regular-expression functions instead, since they are much faster and more consistent with regular expressions used in other languages and applications. Support for the
eregfunctions is being removed so PHP can standardize with one regular-expression engine.
It is recommended that you migrate away from these features with PHP V5.3. Future major PHP releases will drop support for the above items.
Summary
PHP V5.3 has numerous new features and has "cleaned up" several items. There are some backward-compatibility issues. This article provides some guidance for migrating your Web application to work with PHP V5.3. For the latest details regarding PHP V5.3 see the PHP wiki, which has notes on any other changes that might affect your applications.
Resources
Learn
- Learn more about closures from Wikipedia.
- PHP For Windows is dedicated to supporting PHP on Microsoft Windows. It also supports ports of PHP extensions or features, and provides special builds for the various Windows architectures.
- Visit the PHP wiki to learn all about changes to PHP V5.3.
- on learning to program with PHP, see "Learning PHP, Part 1," Part 2, and Part 3.
- Planet PHP is the PHP developer community news source.
- The PHP Manual has information about PHP data objects and their capabilities.
- Visit Safari Books Online for a wealth of resources for open source technologies.
-
- PHPMyAdmin is a popular PHP application that has been packaged in Phar to use as an example of how easy using Phar archives are.
- Get PHP V5.2. | http://www.ibm.com/developerworks/opensource/library/os-php-5.3new5/index.html?ca=dgr-lnxw64Migrate2PHP5.3&S_TACT=105AGY46&S_CMP=grsitejw64 | CC-MAIN-2014-23 | refinedweb | 1,812 | 66.03 |
Red Hat Bugzilla – Bug 183137
ipw2200 doesn't work
Last modified: 2015-01-04 17:25:40 EST
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20060202 Fedora/1.0.7-1.2.fc4 Firefox/1.0.7
Description of problem:
no wireless, no sound, no battery status, no video.
unable to configure wireless, sound.
ACPI: bus type pci registered
PCI: PCI BIOS revision 2.10 entry at 0xfd7ce, last bus=7
PCI: Using MMCONFIG
ACPI: Subsystem revision 20050902
ACPI-0339: *** Error: Looking up [Z00A] in namespace, AE_NOT_FOUND
search_node c18cec40 start_node c18cec40 return_node 00000000
PCI: If a device
doesn't work, try "pci=routeirq". If it helps, post a report
eth0: RealTek RTL8139 at 0xf886e000, 00:16:36:0c:6c:14, IRQ 10
eth0: Identified 8139 chip type 'RTL-8100B/8139D'
8139cp: 10/100 PCI Ethernet driver v1.2 (Mar 22, 2004)
ipw2200: Unknown symbol ieee80211_wx_get_encodeext
ipw2200: Unknown symbol ieee80211_wx_set_encode
ipw2200: Unknown symbol ieee80211_wx_get_encode
ipw2200: Unknown symbol ieee80211_txb_free
ipw2200: Unknown symbol ieee80211_wx_set_encodeext
ipw2200: Unknown symbol ieee80211_wx_get_scan
ipw2200: Unknown symbol escape_essid
ipw2200: Unknown symbol ieee80211_rx
ipw2200: Unknown symbol ieee80211_rx_mgt
ipw2200: Unknown symbol free_ieee80211
ipw2200: Unknown symbol alloc_ieee80211
application mixer_applet2 uses obsolete OSS audio interface
ACPI-0339: *** Error: Looking up [Z00A] in namespace, AE_NOT_FOUND
rtl8129 device eth1 does not seem to be present, delaying initialization.
Command failed: /sbin/modprobe rtl8129
Output:
FATAL: Module rtl8129 not found.
FATAL: Error inserting ieee80211
make -C /lib/modules/2.6.15-1.1831_FC4/build M=/usr/src/ieee80211-1.0.1 MODVERDIR=/usr/src/ieee80211-1.0.1 modules
make[1]: Entering directory `/usr/src/kernels/2.6.15-1.1831_FC4-i686'
make[1]: *** No rule to make target `modules'. Stop.
make[1]: Leaving directory `/usr/src/kernels/2.6.15-1.1831_FC4-i686'
make: *** [modules] Error 2
Version-Release number of selected component (if applicable):
kernel 2.6.15-1.1831_FC4
How reproducible:
Always
Steps to Reproduce:
1. try to configure
2.
3.
Additional info:
let me know if I can provide any aditional info.
Thanks David Sequoias
That bug sounds more like a kernel problem, reassigning to the proper component.
Read ya, Phil
does the wireless driver in 1833 work any better?
(Building your own driver is unsupported, any issues you have with that you
should take to the driver maintainers)
Linux localhost.localdomain 2.6.15-1.1833_FC4 #1 Wed Mar 1 23:41:37 EST 2006
i686 i686 i386 GNU/Linux
ACPI: Subsystem revision 20050902
ACPI-0339: *** Error: Looking up [Z00A] in namespace, AE_NOT_FOUND
search_node c18dcc40 start_node c18dcc40 return_node 00000000
PCI: Using ACPI for IRQ routing
PCI: If a device doesn't work, try "pci=routeirq". If it helps, post a report
PCI: Cannot allocate resource region 7 of bridge 0000:00:1c.0s:06:04.0 failed with error -5
ieee80211_crypt: registered algorithm 'NULL'
ieee80211: 802.11 data/management/control stack, git-1.1.7
ieee80211: Copyright (C) 2004-2005 Intel Corporation <jketreno@linux.intel.com>
I'm seeing the ipw-2.4-boot.fw load fail on FC5 as well.
I found /lib/firmware empty. After I went to the Intel website, downloaded the
driver kit, and populated /lib/firmware, the card now loads and works fine. (Is
this due to RH/FC rejecting binary driver modules from inclusion?)
ipw2200 version 1.1.3m and ieee80211 version 1.1.14 works very well for me. It's
clearly the best version of the ipw2200 driver i've tried so far.
NetworkManager works great with WPA-Personal encrypted networks with hidden SSID
as well as unencrypted wireless networks. That's all i've tried. It seems like
my frequent dropouts are gone or at least minimized in some way.
Note: With the latest 2.6.17 update, you must have the 3.0 firmware installed
from
The original problem in this bug should be fixed, so I'm closing this out.
If there are still any problems with this driver, please file a separate bug.
Thanks. | https://bugzilla.redhat.com/show_bug.cgi?id=183137 | CC-MAIN-2018-05 | refinedweb | 666 | 58.48 |
Welcome To Snipplr
Everyone's Recent C# Snippets
- All /
- JavaScript /
- HTML /
- PHP /
- CSS /
- Ruby /
- Objective C
count the folders and subfolders of a directory0 617 posted 2 years ago by martinbrait
Using foreach to loop over each character letter in a string. Microsoft Official Reference: Explanations from other websites: 580 posted 4 years ago by clinaq
Microsoft Official Documentation: 735 posted 4 years ago by clinaq 704 posted 5 years ago by danh955
This technical tip shows how to Set Line Spacing of the Paragraph in a Shape or Textbox in .NET applications. You can set the line space of the paragraph, its space before and space after using the TextParagraph.LineSpace, TextParagraph.SpaceBefore a...0 1082 posted 835 posted 583 posted 5 years ago by sherazam
This technical tip explains how .NET developers can Crop an EMF Image inside their .NET applications. Image cropping usually refers to the removal of the outer parts of an image to help improve the framing. Cropping may also be used to cut out some p...0 541 posted 694 posted 5 years ago by sherazam
This technical tip explains how .NET developers can Copy Message from one Mailbox folder to another. Aspose.Email API provides the capability to copy message from one mailbox folder to another. It allows copying a single as well as multiple messages...0 632 posted 5 years ago by sherazam
This source code uses public classes and interfaces exposed by GroupDocs.Metadata for .NET to clean metadata from the documents created by a particular author in some directory. Steps include: 1) Scan all documents from an author in a directory (i...0 492 posted 5 years ago by muhammadsabir
This technical tip explains how to .NET developers can list email messages with paging support inside their .NET Applications. In scenarios, where the email server contains a large number of messages in mailbox, it is often desired to list or retriev...0 662 posted 5 years ago by sherazam
Simple record that the content is a Image.0 694 posted 6 years ago by gofast505
Simple script to save Cognex Image using ImageFile.0 1139 posted 6 years ago by gofast505
Configure program for CalAmp lmu-800 and skypatrol nitro gps avl , (works with every device who support AT Command)0 458 posted 6 years ago by hidroxido
remove border in excel from C#0 788 posted 6 years ago by billi8324
Tripwire is a special part of motion detection that can be used to monitor and alert on specific changes. More specifically: tripwire means the detection of intrusion. This code snippet presents how to create a C# software by using prewritten comp...0 658 posted 6 years ago by chewie-wookiee23 posted 6 years ago by sacha-manji32 posted 6 years ago by sacha-manji
A conference call is a meeting, conducted over the phone using audio, between two or more people and usually for the purposes of discussing a particular topic. In my former snippets I dealt with text-to-speech and speech-to-text functionalities. So t...0 634 posted 6 years ago by warnerBro19 808 posted 639 posted 6 years ago by warnerBro19
I am really interested in Computer Vision technology, so I started to dig deeper in this topic. This is how I found the code below for edge detection, a method belonging to object detection. If you are interested in implementing edge detection in C#...0 591 posted 6 years ago by DaniBarros
You can find the full source code for corner detection here. I found this solution on and it worked for me fine. I created a Visual C# WPF application in Visual Studio, and added two .dll files (VOIPSDK.dll and NVA.dll) to the refe...0 560 posted 6 years ago by MahendraGadhavi 824 posted 6 years ago by AdrianVasilyev
Use the code below to lookup IP address in bulk using C-Sharp programming languages and IP2Location MySQL database. In this tutorial, we use the IP2Location LITE database to lookup country of origin from the visitor's IP address. Free databases are a...0 755 posted 6 years ago by Hexahow 641 posted 7 years ago by sacha-manji
This is a simple example to quickly and easily produce a set of links for a given site in your bookmarks from Firefox..0 484 posted 7 years ago by xXxPizzaBreakfastxXx
Use the code below to convert the IP address of your web visitors and lookup for their geographical location, e.g. country, state, city, latitude/longitude, ZIPs, timezone and so on. Free database can be downloaded at 1583 posted 7 years ago by Hexahow
If you are using c++/cli or UnmanagedExports, this is a useful way to share enum int values between c++ and c# code/DLLs without worrying about duplicated code going out of date.0 581 posted 7 years ago by xXxPizzaBreakfastxXx | https://snipplr.com/all?language=c-sharp | CC-MAIN-2022-05 | refinedweb | 814 | 60.75 |
In this Ionic 5/4 tutorial, we’ll discuss how to implement toasts UI components in Ionic Angular application and customize it using CSS styles. that can be used without any plugin. In this post, we will add Ionic Toasts in few steps and also discuss some of the basic methods like Creating, Positioning, and Dismiss toasts.
We’ll discuss how to easily create an Ionic toast example and customize its behavior using properties available and also discuss how to customize it using CSS custom class.
Install or Update latest Ionic CLI
You can install or update the latest version of Ionic CLI package by executing below npm command
$ npm install -g @ionic/cli
Create an Ionic Application
Run the following command to create a new Ionic Angular application with a
blank template
$ ionic start ionic-toasts-app blank --type=angular
Move inside the application folder
$ cd ionic-toasts-app
Adding Toast in Ionic Application
In the Ionic application, we need to import the
ToastController class inside out component to use its
create() method to show a toast message.
In the home.page.ts file make following changes
import { Component } from '@angular/core'; import { ToastController } from '@ionic/angular'; @Component({ selector: 'app-home', templateUrl: 'home.page.html', styleUrls: ['home.page.scss'], }) export class HomePage { constructor(public toastController: ToastController) { } ... ... }
How to create a Toast?
The
ToastController class provides the
create() method which returns a promise callback. Inside the
then() callback we call
present() method to show a toast message.
showToast() { this.toastController.create({ header: 'Hurrayy!', message: 'Added to Cart!', position: 'middle', cssClass: 'my-custom-class', buttons: [ { side: 'end', icon: 'cart', text: 'View Cart', handler: () => { console.log('Cart Button Clicked'); } }, { side: 'end', text: 'Close', role: 'cancel', handler: () => { console.log('Close clicked'); } } ] }).then((obj) => { obj.present(); }); }
Update template with a button
<ion-buttonAdd to Cart</ion-button>
It will look like this
Properties of
create() method
In the method above we have used some options available using which we can modify toast behavior, look and feel.
animated: If
true, the toast will animate.
buttons: An array of buttons for the toast.
color: The color to use from your application’s color palette. Default options are: “
primary“, “
secondary“, “
tertiary“, “
success“, “
warning“, “
danger“, “
light“, “
medium“, and “
dark“. For more information on colors, see theming.
cssClass: Additional classes to apply for custom CSS. If multiple classes are provided they should be separated by spaces.
duration: How many milliseconds to wait before hiding the toast. By default, it will show until dismiss() is called.
enterAnimation: Animation to use when the toast is presented.
header: Header to be shown in the toast.
keyboardClose: If
true, the keyboard will be automatically dismissed when the overlay is presented.
leaveAnimation: Animation to use when the toast is dismissed.
message: Message to be shown in the toast.
mode: The mode determines which platform styles to use.”
ios” | “
md“
position: The position of the toast on the screen.”
bottom” | “
middle” | “
top” Default ‘
bottom‘
translucent: If
true, the toast will be translucent. It only applies when the mode is “
ios” and the device supports backdrop-filter.
You can check more options available here
How to show toast only once and not stack?
In common behavior, Ionic toasts are stacked on the previous one if we call the present method again and again.
The solution to prevent toast stacking
To prevent this and hide previous toast before next comes just modify the above code. Just call the showToastOnce() method after dismissing the any previous open toast.
showOnceToast() { this.toastController.dismiss().then((obj) => { }).catch(() => { }).finally(() => { this.showToast(); }); }
Here I am calling
showToast() method on
final callback which will be called in both cases if there is any open toast or not. Let me know if you have a better workaround for now 😛
How to customize UI toast styling using custom class and CSS custom properties?
We can easily modify Toast by adding a custom class in option property
cssClass: “my-custom-class”
Add below CSS code in global.scss file as toast container is appended outside component page.
.my-custom-class{ --background:#FF4127; --border-color:#852114; --border-radius:1px; --border-style:solid; --border-width:4px; --button-color:#FFDD00; --color:#fff; }
you can check these CSS custom properties available here
Conclusion
We discussed how to create a toast message in the Ionic 5 application and customize the UI style by using the
cssClass property of
create() method.
I hope you liked the tutorial, do share your feedback in the comment section below
Thanks a lot …….. I was troubled with scss changes I made. Didn’t know about globals.scss as it wasn’t reflecting any changes when –width changed in app.scss.
Muchas gracias, me sirvió mucho este tutorial, aunq por algún motivo no me sirvo con el –color, sino a lo normal color. | https://www.freakyjolly.com/ionic-4-adding-toasts-in-ionic-4-application-using-ui-components-with-plugins/ | CC-MAIN-2021-17 | refinedweb | 796 | 56.25 |
Jan 24, 2007 12:44 PM|asimiq80|LINK
Hi,
i'm using ajax library for asp.net, in update panel i've added a button and on button click event i'm reading some data from server and want to save it in a file on client side but it is showing following error when i click the '"-""8L-'.
--------------
Here is a sample code which i'm using to write file against Button click event:
Response.Clear();
byte[] h = { 34, 45, 6, 34, 34, 56, 76, 45, 23 };
Response.ContentType = "application/x-zip-compressed";
Response.OutputStream.Write(h, 0, h.Length);
Response.Flush();
Response.End();
when i place the button outside updatepanel, it works successfully. I need help to resolve this issue or let me know how can i do it asynchronously.
Jan 24, 2007 02:23 PM|Rama Krishna|LINK
Hi
It is not a good practice to use Response.Write.
Jan 25, 2007 03:46 PM|delorenzodesign|LINK
Does anything else cause this issue? I have no instances of Response.Write anywhere in my solution, the only HttpHandlers and HttpModules I have are the ones that are specified for ASP.Net AJAX RTM.
My page has 5 PlaceHolders that simulate a Wizard-type control. This page worked fine with RC1, but something is different with the RTM release.
Sys.WebForms.PageRequestManagerParserErrorException debugging asp.net ajax asp.net ajax rtm
Jan 25, 2007 03:58 PM|Rama Krishna|LINK
Have you analyzed the Response using Fiddler? Can you post some code or if you have the web site public, do you have a URL where the problem can be seen?
Jan 25, 2007 04:00 PM|jkuhlz|LINK
I am having the exact same issue.
It all worked fine with RC1 before upgrading to RTM.
However, after doing a lot of testing, I have found out that all of my update panels work fine if I don't login to my website using the asp.net Login Control. As soon as I log though, some of my update panels quit working. But what I don't get is that some of the update panels don't work, but others do??? Even though they are almost identical? If I log back out, then the update panels start working again?
Some of the errors I am getting> '. '/ble> |html <head> '.So there is a pattern to the errors to? I have did searches to see if I have any Response.Writes and I can't find any.
Jan 25, 2007 08:27 PM|delorenzodesign|LINK
Jan 26, 2007 03:54 PM|jkuhlz|LINK
No Resolutions Yet?
I have incorporated the ELMAH into this website to record and track errors, etc. I was doing some more testing and noticed that every time this error occurs this is the error that gets recorded. So going to look into this. Wanted to post it incase anyone else can come up with a resolution with this error description.
System.Web.HttpException: Server cannot modify cookies after HTTP headers have been sent.
Generated: Fri, 26 Jan 2007 16:43:47 GMT
System.Web.HttpException: Server cannot modify cookies after HTTP headers have been sent.)
Powered by ELMAH, version 2.0.50727.42. Copyright (c) 2004, Atif Aziz, Skybow AG. All rights reserved.
Jan 27, 2007 06:46 PM|liquidboy|LINK
Im just writting to let you know that im also getting the issue.
Update panels after loging-in keep throwing up the above mentioned error.
The postback code in the button which causes the partial page rendering is shown below...
ps.. Was working fine prior to RTM!
Code in ============================
protected void butSaveComment_click(object sender, EventArgs e) { string uid = Request.QueryString["aa1"]; string username = ""; if (taComment.Text.Trim().Length == 0) return; username = Membership.GetUser() == null ? "Anonymous" : Membership.GetUser().UserName; dsBusinessObjectsTableAdapters.RipThatClip_commentsTableAdapter oTA = new dsBusinessObjectsTableAdapters.RipThatClip_commentsTableAdapter(); dsBusinessObjectsTableAdapters.RipThatClip_ClipsTableAdapter oPTA = new dsBusinessObjectsTableAdapters.RipThatClip_ClipsTableAdapter(); string safeHTML_Comment = Server.HtmlEncode(taComment.Text); oTA.Insert(Guid.NewGuid().ToString(), oPTA.GetDataByUID(uid)[0].id, safeHTML_Comment, username, DateTime.Now, uid); oPTA.IncrementComment(uid); divComments.InnerHtml = getComments(); }
Jan 29, 2007 02:21 PM|jkuhlz|LINK
After further testing this issue, I have found that in my particular situation, this error is occuring when the code Roles.GetRolesForUser(username) is executed. Specifically when the username being passed to the function is the username of the person currently logged into the website. If I were to pass some static username to this function, it all works as long as it wasn't the same as the actual username of the person logged in.
For example, I have an ajax-enabled grid-view that displays information. The information that is displayed is based on what role the currently logged in user is in. This grid-view is also available so the general public can see it without logging in.
So when a user comes to the website and views the information in the grid-view all works fine because the Roles.GetRolesForUser() is never called. However as soon as a user logs in to the website, and looks at the grid-view, the Roles.GetRolesForUser() is executed and I get this error when doing a AsyncPostback.> '.
Still looking for an answer.....
Hopefully someone can shed some light here!
Jan 29, 2007 02:38 PM|danehrig|LINK
Based on jkuhlz's post I tried removing all references to the user object in my page, instead sticking them into the session at login time. Unfortunately that didn't fix the problem. I assume that's because roles.getrolesforuser is still getting excecuted when the page is accessed because the directory is protected on a role basis.
It would be nice if one of the MS guys could acknowledge this issue.
Jan 29, 2007 03:12 PM|Steve Marx|LINK
I'm getting some help from the product team on this... I'll update this thread as soon as I know more. If anyone has some easy-to-follow steps to reproduce the bug, that would probably help a lot!
Thanks for all the detailed information so far.
Jan 29, 2007 03:20 PM|danehrig|LINK
Here's steps to duplicate it that I wrote in my other thread about this issue:
Jan 29, 2007 03:29_OKCRealtors_Admin" commandTimeout="30"/> </providers> </roleManager>
I changed the cacheRolesInCookie="true" to cacheRolesInCookie="false" and everything is working fine now????
I am doing more testing, but it seems to have solved my particular issue.
Jan 29, 2007 03:30" commandTimeout="30"/> </providers> </roleManager>
I changed the cacheRolesInCookie="true" to cacheRolesInCookie="false" and everything is working fine now????
I am doing more testing, but it seems to have solved my particular issue.
Jan 29, 2007 03:47 PM|jodywbcb|LINK
Not quite sure what you are trying to accomplish with the UserName and calling the membership functions on this - I do not use the membership feature of .Net but I think you have to pass in a key for the GetUser() as such
Membership.GetUser(Login1.UserName)
-By default a user will either be Anonymous or have a valid username so your assigning null which is going to throw an exception on the dal portion if the table is not setup for nulls (and probably is not) Regardless all this info is already in the HttpContext If the person has access to the directory then they are obviously logged in...try this code instead:
1 protected void butSaveComment_click(object sender, EventArgs e)
2 {
3 string uid = Request.QueryString["aa1"];
4 string username = "";
5
6 if (taComment.Text.Trim().Length == 0) return;
7
8 ///username = Membership.GetUser() == null ? "Anonymous" : Membership.GetUser().UserName;
//check to see if anonymous - assuming this is a role protected directory then anonymous is not allowed:
HttpContext Context = HttpContext.Current;
if ( Context.Request.IsAuthenticated == true) {
9
10 dsBusinessObjectsTableAdapters.RipThatClip_commentsTableAdapter oTA = new dsBusinessObjectsTableAdapters.RipThatClip_commentsTableAdapter();
11 dsBusinessObjectsTableAdapters.RipThatClip_ClipsTableAdapter oPTA = new dsBusinessObjectsTableAdapters.RipThatClip_ClipsTableAdapter();
12
13 string safeHTML_Comment = Server.HtmlEncode(taComment.Text);
14
15 oTA.Insert(Guid.NewGuid().ToString(), oPTA.GetDataByUID(uid)[0].id, safeHTML_Comment,Context.User.Identity.Name()., DateTime.Now, uid);
16 oPTA.IncrementComment(uid);
17
18 divComments.InnerHtml = getComments();
}
19
}
You save the call to the membership as it is already handled .... perhaps that will solve it...
Jan 29, 2007 03:47 PM|Steve Marx|LINK
Glad to hear it!
Are others seeing something different? Following the basic repro steps above, I created a really simple web site and didn't hit any issues.
My" /> <asp:LoginName <asp:UpdatePanel <ContentTemplate> <%= DateTime.Now.ToString() %> <asp:Button </ContentTemplate> </asp:UpdatePanel> </form> </body> </html>
The relevant part of my web.config:
<authorization> <allow users="smarx" /> <deny users="*" /> </authorization> <authentication mode="Forms" />
Jan 29, 2007 03:53 PM|Steve Marx|LINK
FYI, I also tried enabling roles, adding a role and putting the "smarx" user into it and setting cacheRolesInCookie="true", but still no issues.
<roleManager enabled="true" cacheRolesInCookie="true" />
Jan 29, 2007 04:26 PM|danehrig|LINK
Jan 29, 2007 04:30 PM|danehrig|LINK
Steve Marx
FYI, I also tried enabling roles, adding a role and putting the "smarx" user into it and setting
Steve, did you try placing the page in a directory protected by authentication?
To do so, put your page in a subdir with a web.config containing:
<configuration> <appSettings /> <system.web> <authorization> <deny users="?" /> <allow roles="Employee" /> <deny users="*"/> </authorization> </system.web> </configuration>
in this case only a person with the "Employee" role can access any page in the directory.
Jan 29, 2007 05:33 PM|Rama Krishna|LINK
I did the same and I am also not hitting the problem:
<configuration> <appSettings/> <connectionStrings/> <system.web> <authorization > <deny users="?"/> <allow roles="Admin"/> <deny users="*"/> </authorization> </system.web> </configuration>
Jan 29, 2007 06:01 PM|delorenzodesign|LINK
Jan 29, 2007 06:41 PM|Rama Krishna|LINK
Jan 29, 2007 06:53 PM|delorenzodesign|LINK
As an example:
public abstract class CustomSqlRoleProvider : SqlRoleProvider { protected abstract string GetConnectionStringKeyName(); public override void Initialize(string name, NameValueCollection config) { if (config == null) throw new ArgumentNullException("config"); config["connectionStringName"] = GetConnectionStringKeyName(); base.Initialize(name, config); } }</div> <div style="border: 1px dotted rgb(204, 204, 204); background-color: rgb(245, 245, 245);">
public class CustomRoleProvider : CustomSqlRoleProvider</div>
{
protected override string GetConnectionStringKeyName()
{
return string.Format("{0}_Db", Settings.CurrentEnvironment.ToString());
}
}
Jan 29, 2007 06:57 PM|Rama Krishna|LINK
I tried one on MSDN:
It worked like a champ.
Let me try your code.
Jan 29, 2007 07:09 PM|Rama Krishna|LINK
Jan 29, 2007 08:39 PM|liquidboy|LINK
Thanks for your reply but what that line you commented out does is checks if the user is logged in, if not then default the username to 'Anonymous'
( Membership.GetUser() returns null if there is no authenticated user) <== i got this from a microsoft paper on MSDN.
If the user remains anonymous (and doesnt log into my website) then there is never any errors. However as soon as they log in then all cases of the updatepanel where it leveages membership functions fails with the above error?
And the exact same function worked fine on the pre RTM (pre asp.net ajax 1.0) version. So i reverted to it and all works fine!
Ill wait till i hear from product team about a fix for this..
ps. if i removed the membership line altogether then it works fine :( which doesn't do what i want it to do....
Regards
Jan 30, 2007 12:03 AM|jodywbcb|LINK
Jan 30, 2007 12:12 AM|liquidboy|LINK
oh yeah i missed that snippet of code ..
thanks ill do it that way..
Jan 30, 2007 01:12 PM|masthy|LINK
I'm experiencing the Sys.WebForms.PageRequestManagerParserErrorException as well after upgrading to final 1.0 release of ASP.NET AJAX (Worked fine with RC1). But the exeption only occurs with the Safari browser. My page works fine with IE 6/7, Opera 9 and Firefox 1.5/2 (the browsers i've tested). Anyone experiencing the same issue with Safari?
Safari UpdatePanel PageRequestManagerParserErrorException
Jan 31, 2007 07:26 PM|danehrig|LINK
Jan 31, 2007 07:36 PM|Rama Krishna|LINK
Hi
I have posted more details here:
It;s actualyy as bug in the role manager
Feb 01, 2007 04:35 PM|sddavidm|LINK
Feb 01, 2007 08:10 PM|llee|LINK
sddavidm
I am also using Window authentication and still haven't had any luck with this issue.
Member
458 Points
Feb 05, 2007 02:45 PM|michiel1978|LINK
@jkuhlz and all others:
I use the built in SqlMembership and SqlRoleProviders and had constant parsererrors. By removing the cacheroles in cookies I no longer have the parsererrors. That was obscure... Thanks!
Feb 05, 2007 05:30 PM|wsmonroe|LINK
I am having the same issue, and while I am using roles, and the page that I am having problems with manipulates users and their roles,).
It's a pretty straightforward page setup. I was running into the error mentioned in this thread, and tried all the suggestions I've read here. First, I made sure that there are no Response.Write() calls in my page anywhere. Secondly, I switched the cacheRolesInCookie attribute in the roleManager to false. But I still get the same issue? Anything else I can try?
Feb 05, 2007 09:17 PM|liquidboy|LINK
rama khrishna (sorry if i spelt it wrong) talked about the workaround solution here
I implemented the application_onerror one and trapped only for the one that mattered to me..
good luck!!!
None
0 Points
Feb 06, 2007 08:06 AM|markgrubner|LINK
I also use custom providers as I am using MySQL and removing cookie caching fixed this problem for me too.
Thanks very much [:D] - this bug was really pissing me off [:@]
Hopefully someone will come up with a solution to allow the use of caching [;)]
Feb 06, 2007 02:58 PM|wsmonroe|LINK
liquidboy
rama khrishna (sorry if i spelt it wrong) talked about the workaround solution here
I implemented the application_onerror one and trapped only for the one that mattered to me..
good luck!!!
I saw the other thread that describes the bug and possible work arounds. I'm just trying to figure why in my case, disabling the cookie caching doesn't work, when it does seem to work for everyone else. I was able to use the workaround where you set EnableEventValidation="false".
That did work. But I'm troubled by the fact that it seems like everyone can disable cookie caching for roles to fix the problem, but I cannot ...
Feb 14, 2007 09:17 PM|jkardong|LINK
Hi,
I had this exact same issue today and found that my issue was related to the page validation and some hard coded text in a text box that contained "dangerous" characters. I had a textbox that had hard XML in the text area. Without the AJAX update panel, it would throw a warning error. With the AJAX update panel, it would throw the error you saw.
What I did for my testing purposes was switch the validaterequest to FALSE in the page declaration, and this fixed it. I know this is not ideal, but at least points to what the root cause was for me, and maybe you all too....
ValidateRequest="False"
Thanks Jason</div>
Feb 20, 2007 02:58 AM|ffx100|LINK
In my case I was getting this exception trying to execute code like that:
if (Session["serverNames"] == null){
serverNames = new List<string>();
Session["serverNames"] = serverNames;
}
I moved this validation from one function to another and result was always the same – Exception. Exception was troying onle once. After Session variable was declared page was working fine.
I moved Session variable decloration into the Load() as:
if (Session["serverNames"] == null)
{
Session["serverNames"] = serverNames;
}
else
{
serverNames = (List<string>)Session["serverNames"];
}
It fixed my problem.
ajax "Modal Popup" "ModalPopUpExtender"
Feb 27, 2007 05:25 PM|jminond|LINK
I have cacheRolesInCookie="false" in my config, and my loginview inside update panel still wont work.
Like others, the control worked in the preview bits, but since swithing to the latest system.web.extensions dlls, my login is broken.
There are no Response.Writes on my page, however, the loginview does set a cookie / header, and I tihnk that is causing the problem, but I do not know how to fix.
Feb 27, 2007 05:55 PM|forbesoft|LINK
I am having the same problem, but I believe my problem is with the network firewall or other permission settings. The test code is a simple trigger button that updates a literal in the updatepanel with the date/time.
I added the page to three different websites:
1) localhost (desktop at work)
2) production server in house
3) website on my side business web server (co-located)
4) home desktop (IP address known)
I tested these three URL's from four different locations:
1) desktop at work (MSIE 6.0)
2) production server via remote desktop (MSIE 6.0)
3) my biz web server via remote desktop (MSIE 7.0)
4) my home desktop via remote desktop (MSIE 7.0)
From these results, it appears that the problem is not with the code, but with where I browse from. The only failures are from work to computers/servers that I'm not browsing from (not to self).
Could anyone else try these steps with their code problems?
Feb 27, 2007 06:25 PM|jminond|LINK
Fixed, answer found here:
<%@ Page Language="C#" EnableEventValidation="true" %>
Feb 27, 2007 06:30 PM|jminond|LINK
This fixed my problems: enableEventValidation="false"
<pages validateRequest="false" enableEventValidation="false">
Mar 02, 2007 02:14 PM|Corgalore|LINK
We've had this same problem on several websites now when using the release version of MS AJAX and the update panel. The problem was only occurring from within the clients network, however.
We were able to narrow it down to the firewall at the clients location; it was stripping any unknown headers from the data sent during an ajax postback. This was a simple checkbox that needed to be unchecked for our situations and all was fixed.
updatepanel ajax
Mar 09, 2007 10:24 PM|ddanz|LINK
Thank you ! Thank you !
I, too, had a roleManager configured.
Changing to cacheRolesInCookie fixed my problems, which showed up with the RTM 1.0 Ajax release... and did not consistently fail (not sure why not).
Mar 15, 2007 10:18 AM|PeterWT|LINK
Are we missing something fundamental here? I too have this problem. In my case there is an underlying error 0x80004005: Session state has created a session id, but cannot save it because the response was already flushed by the application.
I only know this because I used Fiddler to look at the full response. It consisted of my entire aspx page followed by an entire Asp.Net friendly error page. This means that it is Asp.Net that is doing the illegal additional Response.Write!
We either need Asp.Net fixing so that it does not send out these dual streams or we need Ajax fixing so that it recognises these error pages and does something more sensible. The reason that everyone above was struggling was because Ajax was destroying the vital error information that Asp.Net was trying to send and replacing it with its own (meaningless) message.
Mar 22, 2007 12:44 PM|bpratt|LINK
I am new to ASP.NET and AJAX but I have put together a fairly interesting site using ASP.NET master/content pages and AJAX for UpdatePanels and other controls out of the toolkit. Everything was working great until I added a feature that allows the user to upload a file to the web server. I included a FileUpload control on an AJAX modal dialog and I save the specified file to a directory on the web server when the user presses the OK button on the modal dialog. The operation is successful but any subsequent postback gives me the PageRequestManagerParserErrorException. I know the problem has to do with writing the file out because if I comment out that one line the problem goes away and the interface works as expected.
I've gone through the posts and tried disabling trace, disabling enableeventvalidation, etc. and nothing seems to work. Any ideas? Thanks in advance.
"AJAX Toolkit" "ModalPopUpExtender" AJAX PageRequestManagerParserErrorException
Mar 22, 2007 12:56 PM|jminond|LINK
First: accorting to
Controls that Are Not Compatible with UpdatePanel Controls
To use a FileUpload control inside an UpdatePanel control, set the postback control that submits the file to be a PostBackTrigger control for the panel.
I have found that eventvalidation masks the issue, does not actually correct it.
Upon further investigation, I found that in our case, what was happening is we have httpcompression (GZIP) turned on programatically for some pages.
Ajax.Net did not appreciate GZIP Compression in some pages, I guess there are certain character combos that bug it, or perhaps when i switch to compression one of the headers in the request is modified (although the compression is on for non postback and postback requests, so I am not sure why one header may be different ).
Mar 22, 2007 02:14 PM|bpratt|LINK
<asp:LinkButton <asp:Panel <asp:Panel <asp:Label</asp:Label> <br /> <br /> <asp:FileUpload <br /> <br /> <div style="text-align: center"> <asp:Button <asp:Button </div> </asp:Panel> </asp:Panel> <ajaxToolkit:ModalPopupExtender
Mar 22, 2007 02:25 PM|jminond|LINK
I found this for you:
Scroll down to : Example 3: A Failed File Upload Example
You may want to email Brian since he seems to have more specific experience with the Popup Extender you are using (I have never used it), along with the FileUpload and Extender on the same page. HIs sample even looks alot like yours.
<asp:TextBox</asp:TextBox><br /> <asp:Panel <asp:UpdatePanel <ContentTemplate> Enter the file to upload: <asp:FileUpload <asp:LinkButtonUpload</asp:LinkButton> </ContentTemplate> </asp:UpdatePanel> </asp:Panel> <ajaxtoolkit:PopupControlExtender </ajaxtoolkit:PopupControlExtender>
Mar 22, 2007 06:15 PM|dotnstuff|LINK
Did you ever found resolution. I have somewhat similar problem. My app works fine when accessed it ovet http. It screws up when I try to access it over HTTPS (secure servers). Your reply is greatly appreciated.
Mar 22, 2007 08:32 PM|bpratt|LINK
Not yet. I was able to confirm that is has nothing to do with the FileUpload control. Because the error only occurs when I save the file to the web server, I experimented with writing some random bytes to a temporary file on the web server in Page_Load - same result. The file is saved successfully and the page displays properly but any subsequent postback causes the error.
At this point I'm guessing the fix is in one of three places: the web.config, the IIS settings, or in an updated AJAX library.
Mar 23, 2007 03:07 PM|dotnstuff|LINK
I have exect dame issue in production. Not in development environment though. I have been testing my app with various scenario with no conclusive result. Please let me know if you found resolution to this issue.
Thanks
Jul 08, 2007 09:29 AM|gwpreston|LINK
I have the same problem with the custom membership and role provider (MySQL) that I found on codeproject.com. This problem is more like a random problem that some of my users have told me about. I have tried out the few things that was say on this thread and I am still in the testing staging to find out it works by turning off the EnableEventValidation and cacheRolesInCookie to false. Does anyone know if/when Microsoft will fix this problem so we don’t have to turn these features off?
Thanks
Jul 08, 2007 07:05 PM|degt|LINK
I am not even using membership yet this problem appears with the AJAX RTM. Can't they just fix this once and for all? here is my original post:
I have two forms, the first for search and displaying an intermediate result with a list of possible answers, and the 2nd to display the ultimate results.
Once we have all the data and are ready to display the 1st transfers control to the 2nd. I tried with both passing the parameter in the Context.Items collection as well as in the QueryString but in both cases the same problem ocurrs.
The problem is not retrieving the value, that is fine regardless of the method (Context/QueryString) but that once the 2nd form finishes doing base.PreRender it craps out with:
Sys.WebForms.PageRequestManagerParserErrorException: The message recevied from the serer could not be parsed. Common cuases for this error are when the response is modified by calls to Response.Write(), response filters, HttpModules, or server trace is enabled.
Here is my setup:
Form 1: Search
In the search form I have an update panel with various controls, all of them (dropdowns, textbox, button, gridview) within the UpdatePanel. When the user clicks on the GridView row's Select link an URL is prepared and a Server.Transfer to form 2 is done.
There are no custom controls here. Parameter is passed either in Context or QueryString. That is fine too.
Form 2: Show
Here I am supposed to display the results as I do not want to clutter the search form with other stuff (made that mistake in the original implementation). In this 2nd form I have a prerender method that sets up some data needed by a custom web control that requires some stuff do be done during PreRender.
The custom control renders its own HTML using the Render method but it does not use Response.Write, instead it uses the HtmlTextWriter passed to it.
So, I looked at the error and all of the given causes are out of the question, at least for the things I do so I guess it is an AJAX thing. I don't use Response.Write, trace is disabled, not using HttpModules or even response filters.
Anybody knows how to get rid of this error? this is already the 2nd time I rewrite this form and need to move on.
ajax
Jul 08, 2007 09:13 PM|degt|LINK.
All-Star
48393 Points
Jul 09, 2007 03:45 AM|chetan.sarode|LINK
Thats nice buddy...[:)]
Oct 03, 2007 08:58 AM|Coool|LINK
Ok i have tried all of the solutions mentioned in this thread.
One of them had decrease the possibility of error in my application. Still it's not removed completely.
One solution i can assume is to change the alert msg to cop up with the problem... [8-|] But it still give error msg. May be js is already embaded in dll. If it's possible then how to solve it.
<Installation Dir>\ASP.NET 2.0 AJAX Extensions\v1.0.61025\MicrosoftAjaxLibrary\System.Web.Extensions\1.0.61025.0\MicrosoftAjaxWebForms.debug.js
One more thing i want to ask is I had never find this problem in IE. This problem only occured in Firefox in my app. So, is that the rendering method that prevent IE from generating error? Remeber, i m not using any kind of Rolemanager setting in my config. Then is it possible to solve this using Adaptive Rendering. (Broswercaps).
Suggestions are most appriciated.
Thanks in Advance.
None
0 Points
Oct 29, 2007 11:38 AM|sivakrishna_kalvagadda@satyam.com|LINK
I do have one reason why this warning would come up, In my scenario we maintain session I clicked on sort column of grid which would send the Ajax request then it is throwing that exception. Let me be more clear we are trying to hit the page after the session expire.
Hoping this would be the scenario in your condition also!.
"ajax update panel trigger"
Member
12 Points
Nov 29, 2007 08:34 PM|tripnotic80|LINK
Check the triggers for the updatepanel. For example, I have two update panel, one for a gridview and one for a modalpopup. The updatepanel for the modalpopup has triggers for the gridview events. When I took it out, I don't recieve the errors anymore. I don't understand why that is the reason yet...
None
0 Points
Dec 10, 2007 03:18 PM|darbuthnot|LINK
degt..
- Disabling EventValidation on a page does not solve the problem (for me)
- Application_Error method, badly documented and on my setup I don't even get a breakpoint there when the exception occurrs
- Nothing to do with role management in my solution
- Using EnablePartialRendering="false" in the page's ScriptManager solved my problem!.
I followed the same process as above and only the last option, turning off partial rendering worked for me as well. But since this was the only reason I was using AJAX (a simple update panel to prevent entire page rendering), it seem kind of pointless to add AJAX at all.
For me, the problem only started to show up once I moved my app to a web farm.
I wish the AJAX team would come up with a solution once and for all.
da
Feb 14, 2008 03:03 PM|CTGuy67|LINK
This is ridiculous. I'm having the same issue everyone else is. Works fine in my development environment, but not on my live server, that is unless I run the web app from a browser on the server then everything works fine. I'm not using any kind of authentication or role management and it's not a web farm. Just using Ajax with EnablePartialRendering="true". Which like the above poster said is the primary reason to use Ajax. Here's another wrinkle, this doesn't happen with all the Ajax enabled web apps on the live server...just a select few.
I am at my wits end with this!
Member
23 Points
May 07, 2008 10:18 PM|BhaveshPatel|LINK
EnableEventValidation="false" solved my problem.
None
0 Points
May 27, 2008 08:42 AM|RajkumarGS|LINK
<div>Add this code after closing ContentTemplate.</div> <div> </ContentTemplate> <Triggers> <asp:PostBackTrigger </Triggers> </asp:UpdatePanel><div>Add this code after closing ContentTemplate.</div> <div> </ContentTemplate> <Triggers> <asp:PostBackTrigger </Triggers> </asp:UpdatePanel>
ControlID should be the button or any control you click.
</div></div>
Jul 18, 2008 07:49 AM|Sans|LINK
This might help:
I got the same error while using an update panel on an ajax-enabled website. I didn't have login or any of the causes enumerated in the error message.
Contributor
7303 Points
Jul 18, 2008 12:12 PM|tehremo|LINK
For those who experience this with an expired session:
I just went through this and this worked for me. Add the following to your web.config file:
<httpHandlers>
<add verb="GET" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler" validate="false"/>
</httpHandlers>
<httpModules>
<add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
</httpModules>
Once I added this, my redirect on expired sessions started working again. I could still see the js error happening before the redirect, but for me this error is currently only happening when the session expires.
Oct 13, 2008 06:39 PM|larryw|LINK
Long story short, I was receiving this error after modifiying our site for URL rewriting. Adding the following JavaScript solved it for me:
Sys.WebForms.PageRequestManager.getInstance().add_endRequest(EndRequestHandler);
function EndRequestHandler(sender, args)
{
if (args.get_error() != undefined)
{
args.set_errorHandled(true);location.reload(true);
}
}
You can find out more about this error on the following link:
"AJAX Toolkit" "Ajax Control Toolkit" "AJAX .NET 2.0" "Sys.WebForms.PageRequestManagerServerErrorException:500" "Error handling" client "asp.net ajax"
None
0 Points
Jan 17, 2009 04:00 PM|ricaforrica|LINK
Hello CTGuy67!
Have you ever solved your problem concerning this Sys.WebForms.PageRequestManagerParserErrorException?
I have the exact same symptoms: all works well in my development server, but when I publish my web site to the production server, I get this error in a specific page (only in firefox).
It happens either in publish mode or copying all the code, and the strangest thing is that if use firefox inside the server I don't get any error!
I thank you in advance for your help.
Best regards,
Ricardo.
None
0 Points
Jan 17, 2009 04:35 PM|ricaforrica|LINK
Hello Larryw!
Thanks for your tip! I added the javascript you suggested and the error went away, all started working as expected...
Unfortunately, only if the site is hosted in my development server, if i publish it to the production server I still get these errors (in a random frequency) :(
I suppose it has nothing to do with the client-side or with the database, so I suppose it may be related to IIS. Any configuration I could change in IIS 7 so that it works also in my development machine?
Thanks in advance for your help!
Ricardo.
Jan 19, 2009 11:42 AM|CTGuy67|LINK
Nope. I decide to just roll over and play dead. Any server that Ajax doesn't work right, I just turn it off.
Someday somewhere someone smarter than I will solve this.
Good luck...sorry I couldn't be more help.
Member
4 Points
Feb 06, 2009 11:10 AM|raviraj_shelke|LINK
I have the same problem.
When session expires and page get redirected to login page , them PageRequestManager/script manager can not parse the responce which contain <html> tag.
I can not remove burron outside of the update panel as it will always post back the page and then their is no use of update panel.
Is any one can help or can PageRequestManager can be updated to handel such issues so thet it will simply redirect the browser to new page.
"AJAX .NET 2.0"
Apr 03, 2009 09:16 AM|srankamal|LINK
hello all i am also facing same error ;and i didt use response.write() in my whole application;please suggest me something to avoid this error error sys.WebForms.PageRequestManagerParserErrorException: The message received from the server could not be parsed. Common causes for this error are when the response is modified by calls to Response.Write(), response filters, HttpModules, or server trace is enabled.
None
0 Points
Jul 27, 2009 02:47 AM|Terry_n_Harris|LINK
I HAVE SOLVED IT!!!
I had this issue for a while i couldnt download anything from my webap as i kept getting this error, then i came across this brillian tpiece of code and everything works fine
<code>
</code>
Member
1 Points
Oct 30, 2009 07:48 AM|sandeep.cec|LINK
I am firing master page button click event using jquery .Button is In Update Panel .Locally my code is running properly But at live server It returns 500 status code error.Complete error is given below.
<asp:UpdatePanel
<ContentTemplate> my button is here
</ContentTemplate> </asp:UpdatePanel>
i have used follows methods to solve issue but of no use
1. validateRequest=false, EnableEventValidation="false",
full error message:
: "JS frame :: chrome://firebug/content/spy.js :: onHTTPSpyReadyStateChange :: line 497" data: no]
Nov 25, 2009 12:35 AM|fabiotib|LINK
Hi,
The error occurs because the control that executed the Server.Transfer () method is declared within one or more UpdatePanel as AsyncPostBackTrigger. So, removing this condition the problem stops occurring.
Member
4 Points
Feb 12, 2010 01:31 PM|manowar83|LINK
Thank you so much, Larry!
Putting the following code:
<script type="text/javascript" language="javascript">
Sys.WebForms.PageRequestManager.getInstance().add_endRequest(EndRequestHandler);
function EndRequestHandler(sender, args)
{
if (args.get_error() != undefined)
{
args.set_errorHandled(true);
}
}
</script>
just after ScriptManager tag solved my issue.
I've been trying to fix it about 3 hours and in the edn I saw your post.
Thank you again :)
Manuel
Jun 19, 2010 11:15 PM|dvdgzzrll|LINK
I have this same error. I've built a simple test page at
Ajax works fine when I test on local browser, but when I upload it to my server it doesn't work and throws the error.
Ajax on my other websites work just fine.
Please help.
Jun 20, 2010 09:03 PM|dvdgzzrll|LINK
have you upgraded to IIS 7 ?
None
0 Points
None
0 Points
May 10, 2011 12:10 PM|annamalai|LINK
you must use "Triggers":
after </ContentTemplate> ,before </UpdatePanel> write this code:
<Triggers> <asp:PostBackTrigger </Triggers>
that ControlID reffer to id of the button you want click on it and get this error
May 31, 2012 07:28 PM|Reyu99|LINK
I got this error when I clicked on a Button in Gridview ( and that GridView is in UpdatePanel).... After debugging the code I found out that this error is misleading... The actual error I got in my code is A potentially dangerous Request.Form value was detected from the client
A textboxes on my page has an XML as text value and that is causing the issue... when I removed the XML value it worked fine for me
( I saw the acutal error in the EventArgs parameter in the below line of code
_onFormSubmitCompleted: function PageRequestManager$_onFormSubmitCompleted(sender, eventArgs) {
Hope this helps to someone...
90 replies
Last post May 31, 2012 07:28 PM by Reyu99 | http://forums.asp.net/t/1066976.aspx?How+to+resolve+quot+Sys+WebForms+PageRequestManagerParserErrorException+quot+issue | CC-MAIN-2015-22 | refinedweb | 6,165 | 64.91 |
Package akka.actor.typed.scaladsl
Class Routers
- java.lang.Object
- akka.actor.typed.scaladsl.Routers
public class Routers extends java.lang.ObjectA router that will keep track of the available routees registered to the
Receptionistand route over those by random selection.
In a clustered app this means the routees could live on any node in the cluster. The current impl does not try to avoid sending messages to unreachable cluster nodes.
Note that there is a delay between a routee stopping and this being detected by the receptionist and another before the group detects this. Because of this it is best to unregister routees from the receptionist and not stop until the deregistration is complete to minimize the risk of lost messages.
Method Detail
group
public static <T> GroupRouter<T> group(ServiceKey<T> key)
pool
public static <T> PoolRouter<T> pool(int poolSize, Behavior<T> behavior)Spawn
poolSizechildren with the given
behaviorand forward messages to them using round robin. If a child is stopped it is removed from the pool. To have children restart on failure, use supervision. When all children are stopped the pool stops itself. To stop the pool from the outside, use
ActorContext.stopfrom the parent actor.
Note that if a child stops, there is a slight chance that messages still get delivered to it, and get lost, before the pool sees that the child stopped. Therefore it is best to _not_ stop children arbitrarily.
- Parameters:
poolSize- (undocumented)
behavior- (undocumented)
- Returns:
- (undocumented) | https://doc.akka.io/japi/akka/2.5/akka/actor/typed/scaladsl/Routers.html | CC-MAIN-2020-05 | refinedweb | 245 | 57.16 |
[ aws . configservice ]
Returns the resource types, the number of each resource type, and the total number of resources that AWS Config is recording in this region for your AWS account.
Example.
Note
If you make a call to the get-discovered-resource-counts action, you might not immediately receive resource counts in the following situations:
It might take a few minutes for AWS Config to record and count your resources. Wait a few minutes and then retry the get-discovered-resource-counts action.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
get-discovered-resource-counts [--resource-types <value>] [--limit <value>] [--next-token <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--resource-types (list)
The comma-separated list that specifies the resource types that you want AWS Config to return (for example, "AWS::EC2::Instance" , "AWS::IAM::User" ).
If a value for resourceTypes is not specified, AWS Config returns all resource types that AWS Config is recording in the region for your account.
Note
If the configuration recorder is turned off, AWS Config returns an empty list of ResourceCount objects. If the configuration recorder is not recording a specific resource type (for example, S3 buckets), that resource type is not returned in the list of ResourceCount objects.
Syntax:
"string" "string" ...
--limit (integer)
The maximum number of ResourceCount objects returned on each page. The default is 100. You cannot specify a number greater than 100. If you specify 0, AWS Config uses the default.
--next-token (string)
The nextToken string returned on a previous page that you use to get.
totalDiscoveredResources -> (long)
The total number of resources that AWS Config is recording in the region for your account. If you specify resource types in the request, AWS Config returns only the total number of resources for those resource types.Example
- AWS Config is recording three resource types in the US East (Ohio) Region for your account: 25 EC2 instances, 20 IAM users, and 15 S3 buckets, for a total of 60 resources.
- You make a call to the get-discovered-resource-counts action and specify the resource type, "AWS::EC2::Instances" , in the request.
- AWS Config returns 25 for totalDiscoveredResources .
resourceCounts -> (list)
The list of ResourceCount objects. Each object is listed in descending order by the number of resources.
(structure)
An object that contains the resource type and the number of resources.
resourceType -> (string)The resource type (for example, "AWS::EC2::Instance" ).
count -> (long)The number of resources.
nextToken -> (string)
The string that you use in a subsequent request to get the next page of results in a paginated response. | https://docs.aws.amazon.com/cli/latest/reference/configservice/get-discovered-resource-counts.html | CC-MAIN-2018-26 | refinedweb | 434 | 55.13 |
Red Hat Bugzilla – Bug 89460
Enhancement Request: MySQL v4 RPM (at least a beta rpm)
Last modified: 2007-04-18 12:53:15 EDT
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.0.1) Gecko/20021003
Description of problem:
There are many of us who would like to begin using MySQL v4. V4 has many
improvements of which we wish to take advantage. MySQL has now moved all
production recommendations to v4. Some of us have been installing MySQL v4 from
binary and just stomping on whatever but this is not the approach that we wish
to take. We have also tried installing the MySQL v4 RPMs from but
these have namespace conflicts with how Redhat has named its MySQL packages
(MySQL v. mysql - why?) and dependency problems like libmysqlclient.so.10.
I would like to request for Redhat to put out at least a *beta* rpm of MySQL
v4 (probably 4.0.12 now) that we could begin to use for testing migration
efforts to MySQL v4. Redhat would certainly get some valuable feedback from
those of us who were willing to test MySQL v4.
thx,
Gerry Reno
Version-Release number of selected component (if applicable):
mysql-3.23.54a-11
How reproducible:
Always
Steps to Reproduce:
1.read description
2.
3.
Additional info:
Database Server
Production: 4.0.12
Alpha: 4.1.0
Recent: 3.23.56
<snip>
On Wednesday 23 April 2003 19:56, xxx wrote:
> Just curious, do anyone know when the 4.0.13 will be released? Thank you.
Probably, next week.
Egor Egorov
Egor.Egorov@ensita.net MySQL AB
</snip>
An idea to consider might be implementing a switch for MySQL for V3 and V4
similar to the MTA switch for sendmail and postfix. That way you could provide
both V3 and V4 RPM versions of MySQL and then select through /etc/alternatives
which version was active. This would make it much easier for those who would
like to migrate from V3 to V4.
thx,
Gerry Reno
Yes, this is something we are looking at. If it were only a matter of packaging
up mysql proper, this would not be a problem. However, we have to consider the
other packages that relate to mysql as well as migration of existing mysql
installations.
Yes, I understand the problem is not trivial. It's more than packaging up a
mysql RPM. Heck, even I can do that. The dependent packages are definitely the
tricky part - I know; I've tried doing this. Nonetheless, we do need a way to
migrate to V4 and I think that a switch mechanism makes sense. It would support
those that still want to remain on V3 while providing the ability to use V4 for
those who have migrated. Packages that are dependent on MySQL probably aren't
using functionality that is specific to any MySQL version and so as long as they
are not relying upon anything that is version specific they should be ok. And
if they are then of course they would need to be visited. It may also be
necessary to develop a set of migration scripts.
Whatever mechanism is developed it should also be flexible enough to also
account for future versions of MySQL such as v5 which will include stored
procedures and many will certainly want to have access to this version when it
is finally released.
thx,
Gerry Reno
Database Server
Production: 4.0.13
Alpha: 4.1.0
Recent: 3.23.56
lenz@mysql.com 2003-05-20
<snip>
MySQL 4.0.13, a new version of the popular Open Source Database, has been
released. It is now available in source and binary form for a number of
platforms from our download pages at and
mirror sites.
Ok... I've packaged up MySQL 4.0.13 and it should be in Rawhide tomorrow.
There was a shared library version switch between mysql 3.23.56 (current
supported version) and 4.0.13 (rawhide version).
mysql 3.23.56 provides libmysqlclient{_r}.so.10
mysql 4.0.13 provides libmysqlclient{_r}.so.12
The backlevel shared libraries are in a new package, mysqlclient10, which
will be avaiable in Rawhide at the same time. | https://bugzilla.redhat.com/show_bug.cgi?id=89460 | CC-MAIN-2017-04 | refinedweb | 707 | 75.91 |
Solution for Programmming Exercise 9.2
This page contains a sample solution to one of the exercises from Introduction to Programming Using Java..
In my solution to Exercise 7.6, words are added to an arraylist in the order in which they are encountered. After the file has been completely read, the arraylist is sorted into alphabetical order before the list of words is printed. Since a binary sort tree is designed to store words in alphabetical order at all times, there is no need for sorting. At the end of the program, an inorder traversal of the tree can be used to output the words to the file. Using an inorder traversal guarantees that the words will be output in increasing order.
For my solution to this exercise, I copied the routines treeInsert, treeContains, and countNodes from SortTreeDemo.java. I also copied the declaration of root as a static member variable, since that's the variable that represents the tree itself. (It's unfortunate that root has to be a global variable rather than a local variable in main(), but it's used as a global variable in the treeInsert routine. A better solution to the exercise would define a BinarySortTree class to encapsulate the data and routines needed to represent the tree and to use a variable of type BinarySortTree in the program.)
Only a few changes are needed in the main() routine of the original program. They are shown in red in the solution shown below. All-in-all, the substitution of the binary tree for the arraylist is very straightforward.
/** * Makes an alphabetical list of all the words in a file selected * by the user. The list can be written to a file. * The words are stored in a binary sort tree. */ public class ListAllWordsFromFileWithTree { private static TreeNode root; // Pointer to the root node in a binary tree. // This tree is used in this program as a // binary sort tree. When the tree is empty, // root is null (as it is initially). public static void main(String[] args) { System.out.println("\n\nThis program will ask you to select an input file"); System.out.println("It will read that file and make an alphabetical"); System.out.println("list of all the words in the file. After reading"); System.out.println("the file, the program asks you to select an output"); System.out.println("file. If you select a file, the list of words will"); System.out.println("be written to that file; if you cancel, the list"); System.out.println("be written to standard output. All words are converted"); System.out.println("lower case, and duplicates are eliminated from the list.\n\n"); System.out.print("Press return to begin."); TextIO.getln(); // Wait for user to press return. try { if (TextIO.readUserSelectedFile() == false) { System.out.println("No input file selected. Exiting."); System.exit(1); } // ArrayList<String> wordList = new ArrayList<String>(); DELETED LINE String word = readNextWord(); while (word != null) { word = word.toLowerCase(); // convert word to lower case if ( treeContains(root,word) == false ) { // This is a new word, so add it to the list treeInsert(word); } word = readNextWord(); } int wordsInTree = countNodes(root); System.out.println("Number of different words found in file: " + wordsInTree); System.out.println(); if (wordsInTree == 0) { System.out.println("No words found in file."); System.out.println("Exiting without saving data."); System.exit(0); } // selectionSort(wordList); DELETED LINE TextIO.writeUserSelectedFile(); // If user cancels, output automatically // goes to standard output. TextIO.putln(wordsInTree + " words found in file:\n"); treeList(root); System.out.println("\n\nDone.\n\n"); } catch (Exception e) { System.out.println("Sorry, an error has occurred."); System.out.println("Error Message: " + e.getMessage()); } System.exit(0); // Might be necessary, because of use of file dialogs. } /** * Read the next word from TextIO, if there is one. First, skip past * and)) { TextIO.getAnyChar(); // Read the character. ch = TextIO.peek(); // Look at the next character. } if (ch == TextIO.EOF) // Encountered end-of-file return null; // At this point, we know that the next character, so read a word. bread out of the loop. break; } // If we haven't broken out of the loop, next char is a letter. } return word; // Return the word that has been read. } //------------- Binary Sort Tree data structures and methods ------------------ //------------- (Copied from SortTreeDemo.java) ------------------------------- /** * An object of type TreeNode represents one node in a binary tree of strings. */ private static class TreeNode { String item; // The data in this node. TreeNode left; // Pointer to left subtree. TreeNode right; // Pointer to right subtree. TreeNode(String str) { // Constructor. Make a node containing the specified string. // Note that left and right pointers are initially null. item = str; } } // end nested class TreeNode /** *.) */ private static void treeInsert(String newItem) { if ( root == null ) { // The tree is empty. Set root to point to a new node containing // the new item. This becomes the only node in the tree. new() /** * Return true if item is one of the items in the binary * sort tree to which root points. Return false if not. */ static boolean treeContains( TreeNode root, String item ) { if ( root == null ) { // Tree is empty, so it certainly doesn't contain item. return false; } else if ( item.equals(root.item) ) { // Yes, the item has been found in the root node. return true; } else if ( item.compareTo(root.item) < 0 ) { // If the item occurs, it must be in the left subtree. return treeContains( root.left, item ); } else { // If the item occurs, it must be in the right subtree. return treeContains( root.right, item ); } } // end treeContains() /** * Print the items in the tree in postorder, one item to a line. * Since the tree is a sort tree, the output will be in increasing order. */ private static void treeList(TreeNode node) { if ( node != null ) { treeList(node.left); // Print items in left subtree. TextIO.putln(" " + node.item); // Print item in the node. treeList(node.right); // Print items in the right subtree. } } // end treeList() /** * Count the nodes in the binary tree. * @param node A pointer to the root of the tree. A null value indicates * an empty tree * @return the number of nodes in the tree to which node points. For an * empty tree, the value is zero. */ private static int countNodes(TreeNode node) { if ( node == null ) { // Tree is empty, so it contains no nodes. return 0; } else { // Add up the root node and the nodes in its two subtrees. int leftCount = countNodes( node.left ); int rightCount = countNodes( node.right ); return 1 + leftCount + rightCount; } } // end countNodes() } | http://math.hws.edu/javanotes/c9/ex2-ans.html | crawl-001 | refinedweb | 1,068 | 69.99 |
The inner product is just like same as the dot product. You can multiply only finite vectors in dot product but in the case of inner product, you can multiple infinite vectors. In NumPy, there are many functions to manipulate the NumPy array. The function numpy.inner() calculate the inner product of two vectors in space. In this tutorial, you will know how to find the inner product in python using NumPy with various examples.
Examples of Numpy inner Product
Example 1: Numpy inner product on two vectors in one dimension
Let’s create two vectors of a single dimension. You can create a NumPy array using the numpy.array() method. Let’s create it.
array1 = np.array([10,20,30]) array2= np.array([2,3,4])
After the creation, you have to pass it as an argument inside the numpy.array() method. Execute the below lines of code.
import numpy as np array1 = np.array([10,20,30]) array2= np.array([2,3,4]) print(np.inner(array1,array2))
Output
You can see the there is scalar output after doing the inner product on two single dimension vectors.
Example 2: Inner product on two vectors in Multi dimension
Now let’s find the inner product on two multi-dimensional arrays. The first one will be a two-dimensional array and the second one is a single dimensional array. Let’s create both of them.
array1 = np.array([[10,20,30],[40,50,60],[70,80,90]]) array2= np.array([2,3,4]
Execute the below code to find the inner product.
import numpy as np array1 = np.array([[10,20,30],[40,50,60],[70,80,90]]) array2= np.array([2,3,4]) print(np.inner(array1,array2))
Output
The output can be a single or multi-dimensional array that will depend on input values.
Example 3: Inner product on one multi-dimensional array and a scalar value
In the last example, let’s find the inner product on a two-dimensional array and a scalar value. Let’s create a two-dimensional array.
array1 = np.array([[10,20,30],[40,50,60]])
After creating it, now I want to perform the inner product of the above array with 11 as a scalar value.
Execute the below lines of code.
import numpy as np array1 = np.array([[10,20,30],[40,50,60]]) print(np.inner(array1,11))
Output
The output array will be of the same shape as the input multi-dimensional array.
That’s all for now. These are examples of how to calculate the inner product on two vectors. Hope these examples have cleared your all queries on it. Even if you have any doubt then you can contact us for more help.
Source:
Numpy Inner Product Documentation
Join our list
Subscribe to our mailing list and get interesting stuff and updates to your email inbox. | https://www.datasciencelearner.com/calculate-numpy-inner-product-in-python/ | CC-MAIN-2021-39 | refinedweb | 480 | 59.7 |
I was missing a command to erase only blank chars (\n, \r, \r and nbsp).
so I did a simple plugin and assigned to ctrl+shift+delete and ctrl+shift+backspace
- Code: Select all
import sublime, sublimeplugin, re
class RightEraseBlankCharsCommand(sublimeplugin.TextCommand):
def run(self, view, args):
pt = re.compile(r"[ \n\r\t]");
sz = view.size();
for region in view.sel():
p = region.begin()
while pt.match(view.substr(p)) and p < sz :
view.erase(sublime.Region(p,p+1))
class LeftEraseBlankCharsCommand(sublimeplugin.TextCommand):
def run(self, view, args):
pt = re.compile(r"[ \n\r\t]");
sz = view.size();
for region in view.sel():
p = region.end()-1
while p > 1 and pt.match(view.substr(p)) :
view.erase(sublime.Region(p,p+1))
p -= 1
I couldn't think of anything better, but it works
if you can, please help improving it
I hope it's useful, at least it is for me
bye | http://www.sublimetext.com/forum/viewtopic.php?p=832 | CC-MAIN-2014-49 | refinedweb | 156 | 55.61 |
There are a number of implementations of what is commonly referred to as "IMAGE2HTML", but I believe this is the first one you may see that really succeeds in hitting the high-resolution "realistic- looking" mark.
I started with some code that is easy to find, in various forms, and in a variety of programming languages, and then modified it to work as a static class library with a ConvertImage method that accepts a URL to an image on the web, and an integer scale factor, and returns you a long string of HTML that can be added to a web page. In this case I simply assign the result HTML to a Label Control, and I"m done.
The key to the high - resolution part of it is something that a lot of programmers miss - you can only return the HTML equivalent of a pixel by its color. Since Hexadecimal color values give us a very large range that browsers are capable of rendering, that's not the issue. Once you've done that you can either display a dot "." or a number corresponding in some way to the color, but you have still missed the mark - because the default rendering behavior of the browser is to have some spacing between letters, some additional value of spacing between words, and some default further value of spacing between lines.
The net result of all this is that you get your HTML rendering of the image, but it looks crappy and washed out because of all the whitespace between the faux pixels and between each faux "line" of pixels.
So what's the answer? Its CSS! CSS style directives give us extremely fine-grained control over things like letter and word spacing, as well as line spacing. So much so, that if we aren't careful we can make a sentence look like this!
If you get the adjustments right on the CSS letter-spacing, word-spacing and line-height attributes, you can get those "faux pixels" to really pack together and look like an actual photograph. Not only that, but it does not take a lot of extra HTML because you can do this as a tag style applied to a PRE tag at the very beginning, and everything inside your <PRE> </PRE> tags will sport your cool styling! So without further bloviating, let's take a look at the class, examine the code, and look at a live sample!
Here is my "ConvertImage" class:
using System;
using System.Text;
using System.IO;
using System.Web;
using System.Net;
using System.Drawing ;
namespace PAB.Web.Utils
{
public class Image2Html
{
private Image2Html()
{
}
public static string ConvertImage( string imageUrl, int scale)
{
WebClient wc = new WebClient();
byte[] img = wc.DownloadData(imageUrl);
if(img.Length >100000) return "<H1><font color=white>Sorry,Image too big for demo!</font></h1>";
MemoryStream imgStream = new MemoryStream(img);
Bitmap b = (Bitmap)Image.FromStream(imgStream);
MemoryStream ms = new MemoryStream();
StreamWriter SW = new StreamWriter(ms);
SW.WriteLine("<!--%<---Clip Here-->");
SW.WriteLine("<style>pre{letter-spacing:-4px;word-spacing:-4px;line-height:2px}</style>");
SW.WriteLine("<pre><b><font size='1pt'>");
for(int y=0;y<b.Height;y+=scale)
{
for(int x=0;x<b.Width;x+=scale)
{
SW.Write("<font color='#" + b.GetPixel(x,y).Name.Substring(2) + "'>");
SW.Write( ((byte)b.GetPixel(x,y).ToArgb())>>7 );
SW.Write("</font>");
}
SW.WriteLine();
}
SW.WriteLine("</font></b></pre>");
SW.Close();
SW = null;
byte[] b2= ms.ToArray ();
string s = System.Text.Encoding.ASCII.GetString(b2);
return s;
}
}
So how does it work?
We receive an Image URL and an integer "scale factor" as parameters. We use the simplified WebClient class to retrieve the specified image into a byte array. In the case of this demo, if the image is over 100000 bytes I throw it away because I don't want people abusing our bandwidth here with their wise ideas.
We then wrap a new MemoryStream around our image bytes, and we use the convenient FromStream method of the Image class, casting the Image to a Bitmap. Bitmap is the "Lowest common denominator" class that allows us to manipulate pixels, get their colors, x/y coordinates and so on, regardless of the format of the original image.
Next, I create a new MemoryStream and wrap a StreamWriter around it so I can write my HTML. Finally, I scan over all the y-axis pixels and then down the x-axis, getting the pixel color with the GetPixel method. This is then used to create the <font color=#... tag, then again as a number obtained via bitshifting the 32 bit color value so I can have a single-digit number value to write out as my colored HTML. You will note that the styles I referred to earlier are applied to the PRE element in the first line:
SW.WriteLine("<style>pre{letter-spacing:-4px;word-spacing:-4px;line-height:2px}</style>");
Finally when we are done, we get a byte array out of our MemoryStream, convert it to a big string of HTML, and return it to the caller.
The result, which you can view here as well as downloading the complete solution below, looks like this:
View the live demo here
Well! Is it HTML, or is it live? You can View Source on the page to see! Try a scale of 4 or even 3 to see real quality. This works fine in Firefart and Internet Exploder (whichever your poison), although it renders quite a bit faster in IE. I cannot think of much to do with this right now, but I bet other people can, so here is my July 4th 2006 present to you! Have a safe and happy holiday!
N.B. Thanks to Robbe for reminding me that you can get the Color object at the first GetPixel call and then re-use it without having to call GetPixel a second time. The downloadable source has been updated with this change.
Download the Solution that accompanies this article
Articles
Submit Article
Message Board
Software Downloads
Videos
Rant & Rave | http://www.eggheadcafe.com/articles/20060701.asp | crawl-002 | refinedweb | 1,012 | 62.48 |
What are the major features of ES2017 and what's on the list for ES2018? This two-part series explores the latest and greatest features of ECMAScript. Part 1 explores major features like async functions and shared memory and atomics, while Part 2 explores minor features.
Let's check in with ECMA International, Technical Committee 39! It turns out the 6 in ES6 does not stand for the number of years it takes for a release. I kid! Since ES6/ES2015 took so long to release (6 years, hence my jab) the committee decided to move to a yearly small-batch release instead. I'm a big fan of this and I think the momentum keeps things moving and JavaScript improving. What presents did we get for ES2017 and what's on our list for ES2018?
You can learn more about the TC39 process of proposals from 2ality by Dr. Axel Rauschmayer: The TC39 Process for ECMAScript Features.
In January, at the TC39 meeting, the group settled on the ECMAScript proposals that would be slated as the features of ES2017 (also referred to ES8, which probably should be nixed to avoid confusion). This list included:
Major features
Minor features
In this post, the first in a two-part series, we'll cover the major features listed above. You can read the second post to cover the minor features.
Async Functions on GitHub (Proposed by Brian Terlson)
I'm starting here because it was first on the list and my level of excitement is pretty high for this nifty addition. In ES2015 we got promises to help us with the all too familiar condition commonly known as… (are you really going to make me say it?) CALLBACK HELL 😱.
The async/await syntax reads entirely synchronously and was inspired by TJ Holowaychuk's Co package. As a quick overview, async and await keywords allow you to use them and try/catch blocks to make functions behave asynchronously. They work like generators but are not translated to Generator Functions. This is what that looks like:
async
await
try
catch
// Old Promise Town
function fetchThePuppies(puppy) {
return fetch(puppy)
.then(puppyInfo => puppyInfo.text())
.then(text => {
return JSON.parse(text)
})
.catch(err =>
console.log(`Error: ${err.message}`)
)
}
// New Async/Await City
async function fetchThePuppies(puppy) {
try {
let puppyInfo = await fetch(puppy)
let text = await puppyInfo.text()
return JSON.parse(text)
}
catch (err) {
console.log(`Error: ${err.message}`)
}
}
This doesn't mean you should go in and replace all promises in your code with async/await. Just like you didn't go in and replace every function in your code with arrow functions (one hopes), only use this syntax where it works best. I won't go too into detail here because there are tons of articles covering async/await. Check them out (yes, I did add a link of a async/await blog post for each of those last words in the previous sentence, you're welcome 😘). In the upcoming year we will see how people are able to make their code more readable and efficient using async/await.
Shared Memory and Atomics on GitHub (Proposed by Lars T. Hansen)
Wait, did we enter a theoretical physics class? Sounds fun, but no. This ECMAScript proposal joined the ES2017 line up and introduces SharedArrayBuffer and a namespace object Atomics with helper functions. Super high-level (pun intended), this proposal is our next step towards high-level parallelism in JavaScript.
SharedArrayBuffer
Atomics
We're using JavaScript for more and more operations in the browser relying on Just-in-Time compilers and fast CPUs. Unfortunately, as Lars T. Hansen says in his awesome post, A Taste of JavaScript's New Parallel Primitives from May 2016:).
This proposal provides us with the building blocks for multi-core computation to research different approaches to implement higher-level parallel constructs in JavaScript. What might those building blocks be? May I introduce you to SharedArrayBuffer. MDN has a great succinct definition so I'll just plop that in right here:
The SharedArrayBuffer object is used to represent a generic, fixed-length raw binary data buffer, similar to the ArrayBuffer object, but in a way that they can be used to create views on shared memory. Unlike an ArrayBuffer, a SharedArrayBuffer cannot become detached.
ArrayBuffer
I don't know about you but the first time I read that I was like, "wat."
Basically, one of the first ways we were able to run tasks in parallel was with web workers. Since the workers ran in their own global environments they were unable to share, by default, until communication between the workers, or between workers and the main thread, evolved. The SharedArrayBuffer object allows you to share bytes of data between multiple workers and the main thread. Plus, unlike its predecessor ArrayBuffer, the memory represented by SharedArrayBuffer can be referenced from multiple agents (i.e. web workers or the web page's main program) simultaneously. You can do this using postMessage to transfer the SharedArrayBuffer from one of these agents to the another. Put it all together, and what do you got? Transferring data between multiple workers and the main thread using SharedArrayBuffer so that you can execute multiple tasks at once which == parallelism in JavaScript. But wait, there's more!
postMessage
Before we move on it's important to note some current hold-ups for SharedArrayBuffer. If you've been paying attention to the news lately you may be aware of the processor chip security design flaw causing two vulnerabilities: Meltdown and Spectre. Feel free to read up on it but just know that browsers are disabling SharedArrayBuffer until this issue is resolved.
SharedArrayBuffer is being disabled because you can update its values in a worker, in a tight loop. Then, from another thread, that data can be used as a high-resolution timer.
It'll return once the underlying processor issues are patched.
— Jake Archibald (@jaffathecake) January 4, 2018
SharedArrayBuffer is being disabled because you can update its values in a worker, in a tight loop. Then, from another thread, that data can be used as a high-resolution timer.
It'll return once the underlying processor issues are patched.
Okay, the next stop on this parallel train: Atomics, which is a global variable that has two methods. First, let me present you with the problem the Atomics methods solve. When sharing a SharedArrayBuffer betwixt 🎩 agents (as a reminder agents are the web workers or the web page's main program) each of those agents can read and write to its memory at any time. So, how do you keep this sane and organized, making sure each agent knows to wait for another agent to finish writing their data?
Atomics methods wake and load! Agents will "sleep" in the wait queue while waiting for another agent to finish writing their data, so Atomics.wake is a method that lets them know to wake up. When you need to read the data you use Atomics.load to load data from a certain location. The location is based on the methods two parameters: a TypedArray, an array-like mechanism for accessing raw binary data (what SharedArrayBuffer is using), and an index to find the position in that TypedArray. There is more to it than what we've just covered but that's the gist of it.
wake
load
Atomics.wake
Atomics.load
TypedArray
For now, Atomics has only these two methods. Eventually, Hansen (our lovely author of this proposal and explainer of parallel things) says, there should be more methods, like store and compareExchange, to truly implement synchronization. Again, we are at the beginning stages of parallelism in JavaScript and this proposal is providing us with the building blocks to get there.
store
compareExchange
Phew! Although that was quite a lot to think about, that was still a high level overview. This update may not be used by most developers in the next year but will help advance JavaScript to benefit everyone. So, thank your brain for getting you this deep and check out these fantastic resources to dive in more! | https://www.telerik.com/blogs/ecmascript-goodies-check-out-the-es2017-major-features | CC-MAIN-2020-45 | refinedweb | 1,347 | 63.49 |
I want to draw random coloured points on a JPanel in a Java application. Is there any method to create random colours?
Use the random library:
import java.util.Random;
Then create a random generator:
Random rand = new Random();
As colours are separated into red green and blue, you can create a new random colour by creating random primary colours:
// Java 'Color' class takes 3 floats, from 0 to 1. float r = rand.nextFloat(); float g = rand.nextFloat(); float b = rand.nextFloat();
Then to finally create the colour, pass the primary colours into the constructor:
Color randomColor = new Color(r, g, b);
You can also create different random effects using this method, such as creating random colours with more emphasis on certain colours ... pass in less green and blue to produce a "pinker" random colour.
// Will produce a random colour with more red in it (usually "pink-ish") float r = rand.nextFloat(); float g = rand.nextFloat() / 2f; float b = rand.nextFloat() / 2f;
Or to ensure that only "light" colours are generated, you can generate colours that are always > 0.5 of each colour element:
// Will produce only bright / light colours: float r = rand.nextFloat() / 2f + 0.5; float g = rand.nextFloat() / 2f + 0.5; float b = rand.nextFloat() / 2f + 0.5;
There are various other colour functions that can be used with the
Color class, such as making the colour brighter:
randomColor.brighter();
An overview of the
Color class can be read here: | https://codedump.io/share/dnigju3ZiMUA/1/creating-random-colour-in-java | CC-MAIN-2017-09 | refinedweb | 243 | 77.53 |
49481/python-error-importerror-no-module-named-pygame-locals
I'm trying to build some random game on python as apart of my project. It's a very simple game but I keep getting me the following error:
ImportError: No module named pygame.locals
How do I get rid of it?
You need to download and install the pygame module provided by python.
To download and install on Pycharm, follow these steps:
Goto File -> Settings -> click on the tiny plus sign on the right top corner as shown below.
You'll see a search tab, type the module name there. In your case its pygame.
Finally, click on install package and you're good to go!
So Harsha, you Import the module first ...READ MORE
Use the following command to install tkinter ...READ MORE
sudo apt-get install python3-tk
Then,
>> import tkinter # ...READ MORE
Hi @Hannah, replace
import SpeechRecognition as sr
with
import speech_recognition ...READ MORE
You can also use the random library's ...READ MORE
Syntax :
list. count(value)
Code:
colors = ['red', 'green', ...READ MORE
can you give an example using a ...READ MORE
You can simply the built-in function in ...READ MORE
Hi @Vasgi, for the code you're trying ...READ MORE
It is possible that either urllib3 is ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/49481/python-error-importerror-no-module-named-pygame-locals | CC-MAIN-2020-29 | refinedweb | 222 | 78.04 |
I am having trouble using a function statement in order to count vowels in an input statement here is a detailed description on what the program is intended to do:
1) Get line from user (ex. hi, this is a sentence.)
2) Program must recognize every vowel in the input sentence and count.
3) Must be done using some kind of loop and a function statement.
+++(I was thinking of a for loop)++++
4) Must total up every vowel found in the input sentence and out put data
Here is what I have so far, any suggestions will be greatly appreciated. Thank You
#include <iostream> #include <string> using namespace std; char vowels(char a, e, i, o, u); int main() { string input; int i; char ch; cout << "Please enter text: "; getline(cin, input); cout << "You entered: " << input << endl; cout << "The string is " << input.length() << " characters long." << endl; for(i = 0; i < input.length(); i++) cout << input[i]; cout << endl << endl; return 0; } char vowels(char a, e, i, o, u) { char ch; int vowelCounter = 0; if (ch = vowels; vowelCounter++) cout << input[i]; return vowelCounter; } | http://www.dreamincode.net/forums/topic/194820-counting-vowels-from-line-with-a-function-statement-no-pointer/page__p__1139633 | CC-MAIN-2016-07 | refinedweb | 184 | 68.5 |
Created on 2018-12-12 23:08 by vstinner, last changed 2018-12-13 01:15 by vstinner. This issue is now closed.
On a file, "with file:" fails if it's used a second time:
---
fp = open('/etc/issue')
with fp:
print("first")
with fp:
print("second")
---
fails with "ValueError: I/O operation on closed file", because file.__enter__() raises this exception if the file is closed.
I propose to have the same behavior on multiprocessing.Pool.__enter__() to detect when the multiprocessing API is misused.
Anyway, after the first "with pool:" block, the pool becomes unusable to schedule now tasks: apply() raise ValueError("Pool not running") in that case for example.
Currently, the error only occurs when apply() is called:
---
import multiprocessing
def the_test():
pool = multiprocessing.Pool(1)
with pool:
print(pool.apply(int, (2,)))
with pool:
print(pool.apply(int, (3,))) # <-- raise here
the_test()
---
I would prefer to get an error on at the second "with pool:" line.
New changeset 08c2ba0717089662132af69bf5948d82277a8a69 by Victor Stinner in branch 'master':
bpo-35477: multiprocessing.Pool.__enter__() fails if called twice (GH-11134) | https://bugs.python.org/issue35477 | CC-MAIN-2021-21 | refinedweb | 181 | 67.96 |
Although I said I wouldn't summarize, I got
many letters asking me to do so. Since there's
new info beyond the previous summary, here it
is:
Before that: You all who had problems reaching
me, please excuse me. Our mailer here went
mad the day I posted the request, and my
address got messed up. It seems to be working
now.
I got many answers, whith many suggestions, such
as:
- edit xdm and xinit scripts to do that - it only
works at startup, not if a user do that at the
command line (why would he/she do it? well...)
- substitute xterm by a script that gets -C option
out when needed - it works, but it's not very
elegant;
- this is fixed by patch number 100188-02 - probably
the best answer! In stead of fixing xterm only,
it provides a way to avoid anyone to try such a
trick anymore. I haven't got the patch yet, but
I'm trying...
- two answers suggested changes to xterm code to
check if the user already owns the console. It
also disable xterm sessions if the /etc/nologin
file is found. At the end I include one of them
as a context diff, sent by John Hasley <hasle@andy.
bgsu.edu>. I've installed it, and it's working
just fine.
Thanks to:
eric@cc.uq.oz.au (Eric Halil)
hasley@andy.bgsu.edu (John Hasley)
oran@spg.amdahl.com (Oran Davis)
juo@ecto.rutgers.edu (John Oleynick)
ckd@eff.org (Christopher Davis)
rheaton@synaptics.com (Rick Heaton)
--Dorgival
+------------------------------------+-------------------------+
| Dorgival Olavo Guedes Neto | dorgival@dcc.ufmg.br or |
+------------------------------------+-------------------------+
| Systems Analyst - Network Support | dorgival@dccbhz.ufmg.br |
| Dept of Computer Science | FAX: +55(031)443-4352 |
| Federal University of Minas Gerais | PHONE: +55(031)443-4088 |
+------------------------------------+-------------------------+
Here goes the patch I've used:
> Date: Fri, 13 Mar 92 13:03:37 -0500
> From: John Hasley <hasley@andy.bgsu.edu>
> Subject: Re: xterm -C
> OK. Here it is. The changes are pretty straight-forward.
> No guarantees, of course. It's a standard context diff, appliable with patch.
> 1). The part that checks for '/etc/nologin' is within "#ifdef BGSU",
> (which is defined immediately before it, you can just remove the ifdef...endif
> pair, if you prefer.)
> 2). The part that handles 'xterm -C' is within "#ifdef TIOCCONS".
> John Hasley Internet: hasley@bgsu.edu
> University Computer Services UUCP: ...!osu-cis!bgsuvax!hasley
> Bowling Green State University BITNET: hasley@BGSUOPIE
> Bowling Green, OH 43403-0125 MaBell: (419) 372-2102
-----Cut here----
*** main.c Fri Apr 5 09:59:00 1991
--- main.bgsu2.c Fri Apr 5 13:26:10 1991
***************
*** 107,112 ****
--- 107,117 ----
#include <errno.h>
#include <setjmp.h>
+ #ifdef TIOCCONS
+ #include <sys/types.h>
+ #include <sys/stat.h>
+ #endif /* TIOCCONS */
+
#ifdef hpux
#include <sys/utsname.h>
#endif /* hpux */
***************
*** 560,565 ****
--- 565,576 ----
int Xsocket, mode;
char *basename();
int xerror(), xioerror();
+ #define BGSU
+ #ifdef BGSU
+ static char nolog[20] = "/etc/nologin";
+ FILE *nlfd_bgsu;
+ char bgsu_c;
+ #endif /* BGSU */
ProgramName = argv[0];
***************
*** 713,724 ****
if(**argv != '-') Syntax (*argv);
switch(argv[0][1]) {
case 'h':
/* NOTREACHED */
case 'C':
#ifdef TIOCCONS
! Console = TRUE;
#endif /* TIOCCONS */
continue;
case 'S':
--- 724,746 ----
if(**argv != '-') Syntax (*argv);
switch(argv[0][1]) {
+ #ifdef TIOCCONS
+ static char console_path[20] = "/dev/console";
+ struct stat stat_of_console;
+ #endif /* TIOCCONS */
case 'h':
/* NOTREACHED */
case 'C':
#ifdef TIOCCONS
! if (stat(console_path, &stat_of_console))
! perror("error in accessing console");
! else if (getuid() == (int) stat_of_console.st_uid)
! Console = TRUE;
! else {
! fprintf(stderr, "Only use -C from the console\n");
! Console = FALSE;
! }
#endif /* TIOCCONS */
continue;
case 'S':
***************
*** 746,751 ****
--- 768,783 ----
(ButtonPressMask|ButtonReleaseMask),
GrabModeAsync, GrabModeAsync);
+ #ifdef BGSU
+ /* If not super-user, check for logins disabled */
+ if (getuid() != 0 && (nlfd_bgsu = fopen(nolog, "r")) > 0) {
+ while ((bgsu_c = getc(nlfd_bgsu)) != EOF)
+ putchar(bgsu_c);
+ fflush(stdout);
+ sleep(5);
+ exit(0);
+ }
+ #endif /* BGSU */
term = (XtermWidget) XtCreateManagedWidget(
"vt100", xtermWidgetClass, toplevel, NULL, 0);
/* this causes the initialize method to be called */
This archive was generated by hypermail 2.1.2 : Fri Sep 28 2001 - 23:06:38 CDT | http://www.sunmanagers.org/archives/1992/0352.html | CC-MAIN-2014-52 | refinedweb | 660 | 60.31 |
Introduction
Starting with version 4, the CLR supports In-Process Side-by-Side (InProc SxS).
The topic is extensively treated. In particular, this blog post does a very good job in explaining why CLR InProc Sxs is useful, and which scenarios it addresses.
However I could not find any sample code that causes multiple CLRs to be loaded into a process. I needed such a scenario in order to figure out how to do debugging with Windbg and Sos when multiple CLRs are loaded into a process.
So, the first step for me was to create a solution that causes multiple runtimes to be loaded into a process. The following step will then be to do managed debugging in such a process. This is the topic of a post to come, however
. Let’s get started and create the solution then.
Note: there isn’t a 1:1 mapping between the .Net Framework version and the CLR version, because different versions of the .Net Framework may use the same version of the CLR. Also, the name of the CLR dll has changed over different versions. All this is summarized in the following table:
and will be useful in the rest of this post. Only the CLR version is relevant for our purposes, so the “Target Framework” in Visual Studio projects will be just a means to actually set the CLR version. In this respect, multiple options are possible: for instance, .Net Framework 2.0, .Net Framework 3.0 and .Net Framework 3.5 all set the CLR version to 2.0. In the samples, we’ll be using the last .Net Framework version that uses a given CLR version (.Net Framework 3.5 for CLR 2.0, .Net Framework 4.5 for CLR 4.0) at the time of this writing.
Easy approaches (and why they do not work)
Each assembly can reference one CLR, so obviously it’s not possible to load multiple CLRs in one process writing one assembly only,
It is tempting to think, however, that by having different assemblies in one process, each referencing a different version of the CLR, we end up loading multiple CLRs. Let’s try this way then.
CLR2 referencing CLR4
Follow these steps:
- Create a new solution of type “Blank Solution” in Visual Studio 2012
- Add a console application (ConsoleApplication1), with Target Framework version 3.5
- Add a class library (ClassLibrary1), with Target Framework version 4.5
- Add a reference to ClassLibrary1 from ConsoleApplication1
- Reference a type in ClassLibrary1 from ConsoleApplication1
Even before you build this solution, you’ll see this icon on the added reference:
Building gives an error that is pretty self-explanatory:
1:
2:
3: 2>------ Build started: Project: ConsoleApplication1, Configuration: Debug Any CPU ------
4: 2>C:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(1578,5): warning MSB3274: The primary reference "D:\CLR2CLR4\ClassLibrary1\bin\Debug\ClassLibrary1.dll" could not be resolved because it was built against the ".NETFramework,Version=v4.5" framework. This is a higher version than the currently targeted framework ".NETFramework,Version=v3.5".
5: 2>C:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(1578,5): warning MSB3258: The primary reference "D:\CLR2CLR4\ClassLibrary1\bin\Debug\ClassLibrary1.
6: 2>D:\SR\CLR2CLR4\ConsoleApplication1\Program.cs(5,7,5,20): error CS0246: The type or namespace name 'ClassLibrary1' could not be found (are you missing a using directive or an assembly reference?)
At startup, CLR2 would be loaded, and the reference to an assembly targeting the CLR4 would force that assembly to be run in a previous version of the CLR. This is not supported.
CLR4 referencing CLR2
Follow these steps (you can reuse the previous solution)
- Add a console application (ConsoleApplication2), with Target Framework version 4.5
- Add a class library (ClassLibrary2), with Target Framework version 3.5
- Add a reference to ClassLibrary2 from ConsoleApplication2
- Reference a type in ClassLibrary2 from ConsoleApplication2. Note that this time the reference is added without any warning
If you run the solution, everything works fine. A closer inspection, however, shows that only CLR4 is loaded into the process (you can attach a debugger to the process, or use Sysinternals’ Process Explroer, and check that clr.dll is loaded into the process, but mscorwks.dll is not).
These two tests have one thing in common: directly referencing types between different versions of the CLR does not cause multiple CLRs to be loaded. This is logical, because a direct reference between types implies that they live in the same AppDomain. Having different CLRs loaded, however, means that these CLRs live side-by-side in the same process, separate from each other. This means that every CLR has its own Garbage Collector, JIT Compiler, set of AppDomains, and in general all the supporting structures are separate. The obvious consequence is that two .Net objects referencing each other cannot live in different CLRs.
A more serious approach (and why it still does not work)
This excellent post has valuable information on how multiple CLRs can be loaded into a process (see the section “Our Solution Is Not…”):
To take advantage of in-proc SxS, a managed component must be activated by a native host and interact with its environment through a native interop layer such as COM interop and P/Invoke.
So we come up with the idea of a managed application targeting a version of the CLR, and a managed assembly, exposed through COM Interoperability, targeting a different version of the CLR. Follow these steps (you can reuse the previous solution):
- Add a console application (ConsoleApplication3), with Target Framework version 3.5
- Add a class library (ClassLibrary3), with Target Framework version 4.5
- In ClassLibrary3, expose a .Net class through COM Interop. This requires using the ComVisible attribute and registering the assembly with COM through RegAsm.
- Build ClassLibrary3. Upon successful build, the regasm step creates the Type Library for use with COM Interop
- Add a COM Reference in ConsoleApplication3 to the Type Library generated by building ClassLibrary3
The last step fails and displays a message box with this content:
1: ---------------------------
2: Microsoft Visual Studio
3: ---------------------------
4: A reference to 'ClassLibrary3' could not be added.
5:
6: The ActiveX type library 'D:\CLR2CLR4\ClassLibrary3\bin\Debug\bin\Debug\ClassLibrary3.tlb' was exported from a .NET assembly and cannot be added as a reference.
7:
8: Add a reference to the .NET assembly instead.
9: ---------------------------
10: OK
11: ---------------------------
Basically, this means that it is not possible to use managed code from managed code through COM Interop.
The final solution
Due to the previous limitation, we have to introduce a native intermediary in the chain of calls.
Follow these steps (you can reuse the previous solution):
- Add an ATL Project (ATLProject1) to the solution
- Add an ATL Simple Object to ATLProject1. This object just forwards calls to the object in ClassLibrary3 exposed through COM Interop
- Build ATLProject1
- Add a COM Reference in ConsoleApplication3 to the Type Library generated by building ATLProject1
Build and run the solution (startup project: ConsoleApplication3). Everything works fine and, if you look at the dlls loaded into the process (again, you can either attach a debugger or use Project Explorer), you’ll see that both mscorwks.dll (CLR2) and clr.dll (CLR4) are loaded into the project.
The final Visual Studio 2012 solution, including 7 projects, is attached to this post. Note that only 3 projects (ConsoleApplication3, ClassLibrary3 and ATLProject1) are needed to have multiple CLRs loaded in a process. The other projects (ConsoleApplication1, ClassLibrary1, ConsoleApplication2, ClassLibrary2) exemplify the non-working approaches.
Note: You would get the same effect (CLR2 and CLR4 loaded) by reversing the CLR versions in the projects. In other words, if ConsoleApplication3 had Target Framework version 4.5, and ClassLibrary3 had Target Framework version 3.5, the end result would still be to have CLR2 and CLR4 loaded in the process, only in the reverse order (CLR4 first, then CLR2).
In the next post, we’ll use this solution to show how managed debugging works with CLR InProc SxS. Stay tuned!
Experts Masters !!!
Hello!
I'm trying to use your approach outlined here to successfully migrate some code to .Net 4.5.1, while providing a bridge that lets some CLR2 clients (namely, some SSIS packages) call the code. I downloaded the sample, but can't seem to get it to run. When I run ConsoleApplication3 I get a 'Class Not Registered' error, as follows:
Unhandled Exception: System.Runtime.InteropServices.COMException (0x80040154): C
lass not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG))
As near as I can tell the class is registered correctly. Any tips on resolving? My apolgies for a noob question, but it's been ages since I had to deal with anything ATL or COM related and I think I suppressed most of those memories.
Hello,
based on past experience, I would think that REGDB_E_CLASSNOTREG can be trusted. If you are on a 64-bit OS, the first thing that I would check is the bitness (32/64) of the compiled COM dll, and the bitness of the .net process (Task Manager shows this information). They have to match, otherwise the COM dll can't be loaded into the process and you get REGDB_E_CLASSNOTREG because the COM runtime looks in the "wrong" view of the registry, where the COM component is not registered. You can easily tell in which view of the registry the component is registered by checking if the path of the key includes Wow6432Node
Thanks. Got it figured out. The issue ended up being that the CLR4 assembly being called wasn't in the bin folder for the Console application.. | https://blogs.msdn.microsoft.com/carlos/2013/08/23/loading-multiple-clr-runtimes-inproc-sxs-sample-code/ | CC-MAIN-2018-26 | refinedweb | 1,604 | 55.34 |
Returns a reference to the first element in a vector.
reference front( );
const_reference front( ) const;
A reference to the first element in the vector object. If the vector is empty, the return is undefined.
If the return value of front is assigned to a const_reference, the vector object cannot be modified. If the return value of front is assigned to a reference, the vector object can be modified.
When compiling with _SECURE_SCL 1, a runtime error will occur if you attempt to access an element in an empty vector. See Checked Iterators for more information.
// vector_front.cpp
// compile with: /EHsc
#include <vector>
#include <iostream>
int main( )
{
using namespace std;
vector <int> v1;
v1.push_back( 10 );
v1.push_back( 11 );
int& i = v1.front( );
const int& ii = v1.front( );
cout << "The first integer of v1 is "<< i << endl;
i++;
cout << "The second integer of v1 is "<< ii << endl;
}
The first integer of v1 is 10
The second integer of v1 is 11
Header: <vector>
Namespace: std | http://msdn.microsoft.com/en-us/library/0z70c7a5.aspx | crawl-002 | refinedweb | 164 | 66.44 |
BounceBox Notification Plugin With jQuery & CSS3
Martin Angelov.
Step 1 – XHTML
Going straight to the point, what do you need to create this effect?
The only thing you need is to create a div on your page and put some content inside it. Something like this:
<div id="box"> <p><b>Title!</b>Boring explanation.</p> </div>
In our example the title of the message, the message body, and the warning icon are all created by using a single <p> tag with some CSS wizardry. The warning icon is its background, and the title is a regular bold tag contained inside the paragraph.
Step 2 – CSS
The plugin, we are doing today, adds its own CSS rules for the box positioning, which make the bounce effect possible, but we still need to code the design of the box in our stylesheet file.
styles.css – Part 1
/* The bouncing box */ #box{ background:url('img/box_bg.jpg') repeat-x center top #fcfcfc; height:115px; padding:20px; margin-top:-10px; padding-top:30px; width:400px; border:1px solid #fcfcfc; color:#494848; text-shadow:1px 1px 0 white; font-family:'Myriad Pro',Arial,Helvetica,sans-serif; } #box p{ font-size:25px; background:url('img/warning.png') no-repeat 10px center; padding-left:90px; } #box p b{ font-size:52px; display:block; } #box, #main, a.button{ -moz-border-radius:10px; -webkit-border-radius:10px; border-radius:10px; }
Here we are styling the design of the bounceBox. There are also a couple of rules that are applied inline by jQuery, which assign a ‘fixed’ positioning to the box and center it in the middle of the page, which is required for the animation. This way there is a clear division between the styles for design and those for functionality.
styles.css – Part 2
/* Styling the big button */ a.button{ color:white; letter-spacing:-2px; padding:20px; display:block; text-shadow:1px 1px 0 #145982; font-family:'Myriad Pro',Arial,Helvetica,sans-serif; font-size:80px; font-weight:bold; text-align:center; width:350px; border:1px solid #60b4e5; margin:60px auto; /* CSS3 gradients for webkit and mozilla browsers, fallback color for the rest: */ background-color: #59aada; background-image: -moz-linear-gradient(#5eb2e2, #4f9cca); background-image: -webkit-gradient(linear, 0% 0%, 0% 100%, from(#5eb2e2), to(#4f9cca)); } a.button:hover{ /* Lighter gradients for the hover effect */ text-decoration:none; background-color: #5eb2e2; background-image: -moz-linear-gradient(#6bbbe9, #57a5d4); background-image: -webkit-gradient(linear, 0% 0%, 0% 100%, from(#6bbbe9), to(#57a5d4)); }
In the second part of the code we apply a number of CSS3 rules to the button to achieve that polished look. Notice the two gradient rules which are targeted at Mozilla Firefox and the Webkit browsers (Safari & Chrome). Unfortunately, unlike with other CSS3 rules, they don’t share a common syntax for displaying a gradient, which raises the burden on the developer in some degree.
It is also important to specify a fallback background color in case the browser does not support CSS gradients.
Step 3 – jQuery
First lets start by creating our bounceBox plugin. As we’ve seen before, creating a jQuery plugin is just a matter of extending the $.fn object with a new function. The ‘this’ of the new function is equivalent to the jQuery set of elements that the method was called on.
bouncebox-plugin/jquery.bouncebox.1.0.js
(function($){ /* The plugin extends the jQuery Core with four methods */ /* Converting an element into a bounce box: */ $.fn.bounceBox = function(){ /* Applying some CSS rules that center the element in the middle of the page and move it above the view area of the browser. */ this.css({ top : -this.outerHeight(), marginLeft : -this.outerWidth()/2, position : 'fixed', left : '50%' }); return this; } /* The boxShow method */ $.fn.bounceBoxShow = function(){ /* Starting a downward animation */ this.stop().animate({top:0},{easing:'easeOutBounce'}); this.data('bounceShown',true); return this; } /* The boxHide method */ $.fn.bounceBoxHide = function(){ /* Starting an upward animation */ this.stop().animate({top:-this.outerHeight()}); this.data('bounceShown',false); return this; } /* And the boxToggle method */ $.fn.bounceBoxToggle = function(){ /* Show or hide the bounceBox depending on the 'bounceShown' data variable */ if(this.data('bounceShown')) this.bounceBoxHide(); else this.bounceBoxShow(); return this; } })(jQuery);
We are defining four separate methods which convert the div to a bounceBox (and apply the positioning CSS rules), show it, hide it or toggle between the two by using the animate() jQuery method.
For the toggling we are keeping an internal variable with the data method, which marks whether the box has been shown or hidden.
All of these methods are available to you after you include the jQuery library and the jquery.bounce.1.0.js files to the page. For the neat bounce effect, you will need the jQuery easing plugin as well, which is included in the plugin directory in the zip.
It is really easy to use the plugin, as you can see from the code below.
script.js
$(document).ready(function(){ /* Converting the #box div into a bounceBox: */ $('#box').bounceBox(); /* Listening for the click event and toggling the box: */ $('a.button').click(function(e){ $('#box').bounceBoxToggle(); e.preventDefault(); }); /* When the box is clicked, hide it: */ $('#box').click(function(){ $('#box').bounceBoxHide(); }); });
The code above is executed when the document ready event is fired so we are sure that all the page elements are available to jQuery. The first thing we then do is to covert the #box div to a bounceBox, and bind listeners to the click event on the button and the box itself.
You can put whatever HTML code you want in the box div and it will be properly converted to a bounceBox. You can also have more than one bounce box on the page in the same time.
With this our BounceBox plugin is complete!
Conclusion
You can use this jQuery plugin to present notifications to the user in an eye-catching manner. You can easily put a registration form, newsletter signup or even some kind of advertisement as the content of the box div. Feel free to experiment and share what you’ve done in the comment section.
Presenting Bootstrap Studio
a revolutionary tool that developers and designers use to create
beautiful interfaces using the Bootstrap Framework.
47 Comments
This is great keep up the great work , hope u can finish soon zinescripts !
Great demo, your work speaks volumes.
Very nice. I could see myself using this for a contact form.
great, tanks for script
Awesome tut. i love it.
10 how would i use ajax to have it drop down automatically for there is a change in mysql database.
Use the value of return from ajax to call a function and trigger a click, so that the button will be called automatically
This is a nice little function which can be used for numerous things.
A little tweaking and it could be used as an auto-dropdown notice on forums or community sites telling people what possibilities they get if they sign up.
It can be used to show error messages on forms if a person forgot to fill out some form fields.
Yet another great and useful tutorial.
Thanks for your great work Martin ^_^
Thanks for the tut.
This will be excellent for an April Fools joke page :). Ideas, ideas.
Thanks.
Hey,
you can gradient image ;)
Replace it with:
background-image: -moz-linear-gradient(top, #F6F6F6, #FEFEFE); /* FF3.6 */
background-image: -webkit-gradient(linear,left top,left bottom,color-stop(0, #F6F6F6),color-stop(1, #FEFEFE)); /* Saf4+, Chrome */
filter: progid:DXImageTransform.Microsoft.gradient(startColorStr='#F6F6F6', EndColorStr='#FEFEFE'); /* IE6,IE7 */
-ms-filter: "progid:DXImageTransform.Microsoft.gradient(startColorStr='#F6F6F6', EndColorStr='#FEFEFE')"; /* IE8 */
* ..remove the gradient image
sry
Very nice tutorial..thanks..
Good job Man! Can i use this plugin?
Very well explained. good for people who want to use PDO.
Thanks
Well isn't that just a cute little animation. Nice!
Great, it works like we need it, keep it up (Y)
{applause]
I think I would want you on my team if I were to be putting together any sort of web-app for the masses.
Thanks once again for keen explanation and sweet integration. Not to mention all the ideas that are flying now about.
;)
Great! I love it, maybe will use it on my website.
Great tut
Toggle jquery with a bit of css3
Great little tutorial, and as mentioned by Goatie, it's very versatile!
And it's shown me just how easy it is to write a simple jQuery plugin
Thanks
Thank you for the awesome comments folks! Really appreciate it!
It's recommended that jQuery plugins take up a single function in the jQuery namespace, and that access to "internal methods" be provided, well...internally, via passing arguments to that one public function, rather than adding four separate functions to jQuery to manipulate the state of your plugin.
Note that it's #1 on this list of requirements provided as far back as 2007 by Mike Alsup:
Great plugin, i'm looking forward to using this.
Two words: Sliced. Awesome.
It is possible to auto-hide the notification after a few seconds?
Again..a very nice tutorial..
Cool, it looks impressive. But I have the same question as Bogdan: can the notification be hidden automatically after a few seconds?
what to say ?
Do we need a plugin for silly thing ?
yeah I got what to say , hahahaha :)
@Goatie , Great Idea though !!!
@ adam j. sontag
Thank you for sharing this useful information. I will follow these rules when releasing plugins in the future.
@ Bogdan, @ Deluxe Blog Tips
Yes, you can hide the box automatically. You can do it internally right in the plugin, by adding a timeout with which you schedule a call to bounceBoxHide().
You can download an autohide version of the plugin from here. Use it instead of the regular version of the plugin. You can find an example of its usage in script.js.
Thank you very much..works like a charm..
Nice! Thanks.
I'm new to jquery, so forgive the stupid question. How can i shoot the box from php? For example after verifying something on the page execution i launch the box from php. How can i do this?
Great Job!!!
it is possible to change the position "x" when you show the box?
Thank you very much...
@ Alexander
You can use PHP to write the jQuery code in a script tag on the page. Something like this:
@ edson
You can modify the position of the bounceBox as you would normally do any absolutely positioned div. For example you can modify the horizontal positioning like this:
However be sure to run this code after $('#box').bounceBox(); is run, as the plugin applies its own CSS properties to the bounceBox in order to center it.
Thank you very much…!!!!!!
I wonder, is it possible to combine this with jquery validation plugin?
wounderful tut - great script!
unfortunately I've to consider IE6 so I added some lines in the beginning of the css-part of your script to fix a IE6 bug:
var pos = 'fixed';
if (jQuery.browser.msie && parseInt(jQuery.browser.version) == 6){
pos = 'absolute';
}
this.css({
top : -this.outerHeight(),
marginLeft : -this.outerWidth()/2,
position : pos,
left : '50%'
});
thank you,
rolf
Hi, I have ported the plugin to the Ext Core library:
Beautiful! And useful.
How (and what) would I change to have this activated from a text link instead of "the big button."
Apologies if the answer is 'roll the eyes' simple.
The button is basically a regular link with a bit of styling applied. To open it from a specific link just change the selector on line 7 of script.js.
Currently:
Should be:
Martin, thank you. I really appreciate your reply. It tests out great with a link activation.
I have another two more questions. 1) How would you go about adding an extra bounce (or two)? 2) How do you add say a five-second time-out that closes the box (in addition to closing it with a click)?
Again, thanks. ~ Raubie
Oops. My apologies, you've already answered my second question by providing an autohide version. Thank you!
Great tutorial, is there any way to remove the bounce effect? i love it more without the bounce effect.
Yes, you can remove the bounce effect. Change line 29 of bouncebox-plugin/jquery.bouncebox.1.0.js from:
to:
Is it possible to have this show up after a few seconds and not on clicking the button? How?
you need to trigger the click for the button and set a timeout
Like this
setTimeout(function() {
$('a.button').trigger('click');
}, 4e3);
I tried using this with jQuery 1.8.0, but it looks like the box is not hidden properly now? - Do you know if there needs to be some changes for using the newest jQuery?
How do I get the black overlay like the one shown in zinescript link up top?
I have a question about this script, it's kewl by the way; how does one use it with multiple links? That is to say link one bounces content in id=box, link two bounces content in id=box2; or maybe they should be set by classes?
For example, one may wish to use it for a FAQ, whereby clicking on a particular question bounces a short answer - with a link to the FAQ page with more details. I can't figure out how to have it work with multiple ID's though.
Any help? | http://tutorialzine.com/2010/05/bounce-in-box-plugin-jquery/ | CC-MAIN-2015-06 | refinedweb | 2,243 | 65.73 |
I previously wrote about modeling RTS battles with IronPython. In this entry I'll explore a new policy for attacking that was suggested on the last thread.
Previously, I compared 2 policies for picking which opponent to attack:
1. Attack the weakest enemy.2. Attack a random enemy.
Each turn (eg, after each round of shooting each other), units reapplied their policy to pick a new target.
Timothy Fries asked what would happen if units kept their targets until they killed the target, rather than picking a new target each turn. I'll call that a "sticky" policy, which can serve as a modifier to another policy. So a Sticky-random (SR) policy is a policy that randomly picks a target and attacks it until it destroys it, and then randomly picks the next target.
It was very simple to adjust the python scripts for the new policy:
Our Unit class's PickTarget(self, army) function gets changed from:
def PickTarget(self, army):
guy = self.fpAttackPolicy(army)
assert(guy.IsAlive())
return guy
To:
def PickTarget(self, army):
if (self.stickyTarget and self.target and self.target.IsAlive()):
return self.target
# use policy to pick a target.
self.target = self.fpAttackPolicy(army)
assert(self.target.IsAlive())
return self.target
So now we remember the target in a member field and only pick a new target one the old target is dead. The sticky policy is only applied if stickyTarget is true.
And then we add some new convenient builder functions (in addition to Make(), and MakeR()) to easily create armies with the Sticky-Random policy:
# set the army policy to "sticky-attack" (don't pick a new target until it destroys the current one)
def Sticky(army):
for x in army:
x.stickyTarget = True
return army
def MakeSR(size):
return Sticky(MakeR(size))
Let's look at two equally sized armies applying a stick-random policy fighting each other. As with random, we'd expect the battles to average out as a tie.
This gets a vector of victory margins from 20 battles of 5-on-5. l1 is using an attack random policy (MakeR), l2 is using a sticky-random policy (MakeSR).
l1 = frun(20, lambda: Battle(MakeR(5), MakeR(5)).victory)
l2 = frun(20, lambda: Battle(MakeSR(5), MakeSR(5)).victory)
Recall that the victory margin in Battle(x,y) is between -100% (army x wins with no damage) to +100% (army y wins with no damage).
Last time, we saw that the random policy is basically a tie with an average victory margin of 0% and a standard deviation under 5.
Here's a python function to compute standard deviation:
# compute standard deviation against 0
def sd(l):
import math
return math.sqrt(sum([x*x for x in l])/len(l))
We can look at the average and standard deviation of each run:
>>> avg(l1)-1.2>>> avg(l2)3.75>>> sd(l1)4.69041575982>>> sd(l2)16.7630546142
>>> avg(l1)-1.2>>> avg(l2)3.75>>> sd(l1)4.69041575982>>> sd(l2)16.7630546142
In both cases, the averages come close to 0. However, sticky-random has a significantly larger standard deviation than random (16 vs. 4). In other words, overall stick-random averages out to a tie, but whoever does win, wins by a bigger margin than in an attack-random policy.
Quiz #1: Explain why sticky-random has a wide victory-margin spread than just random.
We can see how an army using Random policy fares vs. an army using the Stick-Random policy.
def RvsSR(nMax, nTimes=1):
return [ favg(nTimes, lambda : Battle(MakeR(i),MakeSR(i)).victory) for i in range(1,nMax+1)]
This is very similar to the matchups used last time, but we use MakeSR() instead of Make() or MakeR() to get an army of the appropriate policy.
Here's a chart showing the victory margin of all 3 matchups (R vs SR, R vs. W, SR vs w). In all cases of X vs Y, Y is the winning army and the graph shows Y's victory margin over X:
We can see that Attack-Weakest is the best policy. It beats the other two, and it even beats Random by a larger margin than Sticky-random does (the red line is consistently higher than the blue line).
So Sticky-random is an intermediate policy: it's better than just attack-random, but not as good as ganging up on the weakest.
A key takeaway was that with Python, it was very easy to both a) adjust the object model and b) run a wide variety of queries. (In fact, I found it harder to make the charts with Excel 2007 then I did to generate the data with Python)
We can objectively rank the attack policies from best to worst: | http://blogs.msdn.com/b/jmstall/archive/2008/01/12/battle-simulations-with-iron-python-part-2.aspx | CC-MAIN-2015-48 | refinedweb | 803 | 55.44 |
Investors eyeing a purchase of Autodesk Inc. (Symbol: ADSK) shares, but tentative about paying the going market price of $136.35/share, might benefit from considering selling puts among the alternative strategies at their disposal. One interesting put contract in particular, is the April put at the $136 strike, which has a bid at the time of this writing of $6.30. Collecting that bid as the premium represents a 4.6% return against the $136 commitment, or a 33.8% annualized rate of return (at Stock Options Channel we call this the YieldBoost ).
Selling a put does not give an investor access to ADSK's upside potential the way owning shares would, because the put seller only ends up owning shares in the scenario where the contract is exercised. And the person on the other side of the contract would only benefit from exercising at the $136 strike if doing so produced a better outcome than selling at the going market price. ( Do options carry counterparty risk? This and six other common options myths debunked ). So unless Autodesk Inc. sees its shares decline 0.2% and the contract is exercised (resulting in a cost basis of $129.70 per share before broker commissions, subtracting the $6.30 from $136), the only upside to the put seller is from collecting that premium for the 33.8% annualized rate of return.
Below is a chart showing the trailing twelve month trading history for Autodesk Inc., and highlighting in green where the $136 strike is located relative to that history:
The chart above, and the stock's historical volatility, can be a helpful guide in combination with fundamental analysis to judge whether selling the April put at the $136 strike for the 33.8% annualized rate of return represents good reward for the risks. We calculate the trailing twelve month volatility for Autodesk Inc. (considering the last 252 trading day closing values as well as today's price of $136.35) to be 34%. For other put options contract ideas at the various different available expirations, visit the ADSK Stock Options page of StockOptionsChannel.com.. | https://www.nasdaq.com/articles/commit-buy-autodesk-136-earn-338-annualized-using-options-2018-03-08 | CC-MAIN-2021-49 | refinedweb | 353 | 65.62 |
table of contents
NAME¶
frexp, frexpf, frexpl - convert floating-point number to fractional and integral components
SYNOPSIS¶
#include <math.h>
double frexp(double x, int *exp); float frexpf(float x, int *exp); long double frexpl(long double x, int *exp);
Link with -lm.
frexpf(), frexpl():
|| /* Since glibc 2.19: */ _DEFAULT_SOURCE
|| /* Glibc versions <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE
DESCRIPTION¶
These functions are used to split the number x into a normalized fraction and an exponent which is stored in exp.
RETURN VALUE¶
These functions return.
EXAMPLES¶
The program below produces results such as the following:
$ ./a.out 2560 frexp(2560, &e) = 0.625: 0.625 * 2^12 = 2560 $ ./a.out -4 frexp(-4, &e) = -0.5: -0.5 * 2^3 = -4
Program source¶
); }
SEE ALSO¶
COLOPHON¶
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | https://manpages.debian.org/bullseye/manpages-dev/frexpl.3.en.html | CC-MAIN-2021-49 | refinedweb | 158 | 60.61 |
Geometric shape formed by extruding a 2D cross section along a 3D spine.
More...
#include <Inventor/nodes/SoExtrusion.h>
The SoExtrusion node specifies geometric shapes based on a two-dimensional cross section extruded along a three-dimensional spine. The cross section can be scaled and rotated at each spine point to produce a wide variety of shapes.
An SoExtrusion is defined by:
Shapes are constructed as follows - For each point in the spine, the cross-section curve, which is a curve in the XZ plane, is scaled about the origin by the corresponding scale parameter (first value scales in X, second value scales in Z), rotated about the origin by the corresponding orientation parameter and translated by the vector defined by the corresponding vertex of the spine curve. Each instance of the cross-section is then connected to the following instance.
The scaleMode field is used to select the points that will be scaled by the current transformation (for example SoTransform), if any. Translation and rotation are applied in all cases. The options are:
A transformed cross section is found for each joint (that is, at each vertex of the spine curve, where segments of the extrusion connect), and the joints and segments are connected to form the surface. No check is made for self-penetration. Each transformed cross section is determined as follows:
1. Start with the cross section as specified, in the XZ plane.
2. Scale it about (0, 0, 0) by the value for scale given for the current joint.
3. Apply a rotation so that when the cross section is placed at its proper location on the spine it will be oriented properly. Essentially, this means that the cross section's Y axis ( up vector coming out of the cross section) is rotated to align with an approximate tangent to the spine curve.
For all points other than the first or last: The tangent for spine[ i ] is found by normalizing the vector defined by (spine[ i +1] - spine[ i -1]).
If the spine curve is closed: The first and last points need to have the same tangent. This tangent is found as above, but using the points spine[0] for spine[ i ], spine[1] for spine[ i +1] and spine[ n -2] for spine[ i -1], where spine[ n -2] is the next to last point on the curve. The last point in the curve, spine[ n -1], is the same as the first, spine[0].
If the spine curve is not closed: The tangent used for the first point is just the direction from spine[0] to spine[1], and the tangent used for the last is the direction from spine[ n -2] to spine[ n -1].
In the simple case where the spine curve is flat in the XY plane, these rotations are all just rotations about the Z axis. In the more general case where the spine curve is any 3D curve, you need to find the destinations for all 3 of the local X, Y, and Z axes so you can completely specify the rotation. The Z axis is found by taking the cross product of:
(spine[ i -1] - spine[ i ]) and (spine[ i +1] - spine[ i ]).
If the three points are collinear then this value is zero, so take the value from the previous point. Once you have the Z axis (from the cross product) and the Y axis (from the approximate tangent), calculate the X axis as the cross product of the Y and Z axes.
4. Given the plane computed in step 3, apply the orientation to the cross-section relative to this new plane. Rotate it counterclockwise about the axis and by the angle specified in the orientation field at that joint.
5. Finally, the cross section is translated to the location of the spine point.
Surfaces of revolution: If the cross section is an approximation of a circle and the spine is straight, then the SoExtrusion is equivalent to a surface of revolution, where the scale parameters define the size of the cross section along the spine.
Cookie-cutter extrusions: If the scale is 1, 1 and the spine is straight, then the cross section acts like a cookie cutter, with the thickness of the cookie equal to the length of the spine.
Bend/twist/taper objects: These shapes are the result of using all fields. The spine curve bends the extruded shape defined by the cross section, the orientation parameters twist it around the spine, and the scale parameters taper it (by scaling about the spine).
SoExtrusion has three parts: the sides, the beginCap (the surface at the initial end of the spine) and the endCap (the surface at the final end of the spine). The caps have an associated SFBool field that indicates whether it exists (TRUE) or doesn't exist (FALSE).
When the beginCap or endCap fields are specified as TRUE, planar cap surfaces will be generated regardless of whether the crossSection is a closed curve. (If crossSection isn't a closed curve, the caps are generated as if it were -- equivalent to adding a final point to crossSection that's equal to the initial point. Note that an open surface can still have a cap, resulting (for a simple case) in a shape something like a soda can sliced in half vertically.) These surfaces are generated even if spine is also a closed curve. If a field value is FALSE, the corresponding cap is not generated.
SoExtrusion automatically generates its own normals. Orientation of the normals is determined by the vertex ordering of the quads generated by SoExtrusion. The vertex ordering is in turn determined by the crossSection curve. If the crossSection is counterclockwise when viewed from the +Y axis, then the polygons will have counterclockwise ordering when viewed from 'outside' of the shape (and vice versa for clockwise ordered crossSection curves).
Texture coordinates are automatically generated by extrusions. Textures are mapped so that the coordinates range in the U direction from 0 to 1 along the crossSection curve (with 0 corresponding to the first point in crossSection and 1 to the last) and in the V direction from 0 to 1 along the spine curve (again with 0 corresponding to the first listed spine point and 1 to the last). When crossSection is closed, the texture has a seam that follows the line traced by the crossSection's start/end point as it travels along the spine. If the endCap and/or beginCap exist, the crossSection curve is uniformly scaled and translated so that the largest dimension of the cross-section (X or Z) produces texture coordinates that range from 0.0 to 1.0. The beginCap and endCap textures' S and T directions correspond to the X and Z directions in which the crossSection coordinates are defined.
Also 3D texture coordinates are automatically generated, in a similar way to 2D textures.
NOTE: If your extrusion appears to twist unexpectedly, try setting environment variable OIV_EXTRUSION_EPSILON to a value slightly smaller number than the default, which is .998.
NOTE: If your crossSection is not convex, you must use a SoShapeHints and set the faceType field to UNKNOWN_FACE_TYPE.
Constructor.
Returns the type identifier for this class.
Reimplemented from SoBaseExtrusion.
Returns the type identifier for this specific instance.
Reimplemented from SoBaseExtrusion.
The shape that will be extruded, defined by a 2D piecewise linear curve in the XZ plane (described as a series of connected vertices).
Default is a square [ 1 1, 1 -1, -1 -1, -1 1, 1 1 ].
The cross-section curve is rotated by this value relative to a local reference system with origin at the current spine point and X / Z axes in the plane containing the cross-section curve.
If one value is specified it applies to every spine point, else there should be as many values as there are points in the spine. Default is no rotation.
The cross-section curve is scaled by this value on the X and Z axes.
If one value is specified it applies to every spine point, else there should be as many values as there are points in the spine. All scale values must be > 0. Default is (1,1) meaning no scaling. | https://developer.openinventor.com/refmans/latest/RefManCpp/class_so_extrusion.html | CC-MAIN-2021-04 | refinedweb | 1,371 | 67.18 |
Is it possible to insert a running clock in a Calc spreadsheet cell? Digital or Analog are both usable.
Yes in theory, you could make a threaded timer in Python with an interval of 1 second, then update the value in your Calc cell to the current time. ( i have not tried this myself yet ).
There is a way to do it without macros, that I did a time ago for a question in the Spanish AOo forum.
In a cell introduce =NOW(), give a name to it like ‘timer’.
Save the file.
Select the cell where to introduce the clock.
Create a Menu/Sheet/Link to external data, select as data source the own file.
Select the ‘timer’ in the ‘Available Tables/Ranges’
Mark Update interval and fix the seconds you like for update the clock.
Ok
Sample timer.ods|attachment
+1 awesome trick!
One disadvantage is that it stops working if the file is renamed, whereas a macro would not have this problem. However, avoiding macros is an advantage, so it’s a tradeoff. Anyway, I did not realize before that this solution can work even with reference to the same file.
As @librebel mentioned, one way to do this is with a threaded Python macro. From my answer at:
Assign
keep_recalculating_threadto the Open Document event.
import time from threading import Thread import uno def keep_recalculating_thread(action_event=None): t = Thread(target = keep_recalculating) t.start() def keep_recalculating(): oDoc = XSCRIPTCONTEXT.getDocument() while hasattr(oDoc, 'calculateAll'): oDoc.calculateAll() time.sleep(5) g_exportedScripts = keep_recalculating_thread,
Instead of
calculateAll(), I also tried updating a single cell containing
NOW() as recommended here. My hope was that only a single cell would need to be changed.
oCell.setFormula(oCell.getFormula() + " ")
This worked too, but it made all cells with
NOW() and
RAND() change anyway, so it does not seem to be any more efficient. See Recalculate - LibreOffice Help. | https://ask.libreoffice.org/t/is-it-possible-to-insert-a-running-clock-in-a-calc-spreadsheet-cell/29371/3 | CC-MAIN-2021-43 | refinedweb | 312 | 67.45 |
Opened 5 years ago
Closed 5 years ago
#5103 closed defect (fixed)
HDF5 file descriptor remains after closed
Description
It is suspicious that HDF5 files are not properly closed when
GDALClose() is called. A snippet below may reproduce the issue.
from osgeo import gdal ds = gdal.Open('test.h5') # at this point, `lsof | grep test.h5` should show an open descriptor. ds = None # at this point, `lsof | grep test.h5` should show nothing, but it persists.
Same test with other formats like tiff worked flawlessly. It doesn't look like Python-only, as multiple
gdal_translate calls from a process still showed a bunch of open descriptors, even after each command had been terminated.
It might be relevant to the delayed close described in HDF5 API, though I have no clue to address it.
Change History (3)
comment:1 Changed 5 years ago by
comment:2 Changed 5 years ago by
comment:3 Changed 5 years ago by
Note: See TracTickets for help on using tickets.
Fixed in trunk (r26061) and branches/1.10 (r26062) | http://trac.osgeo.org/gdal/ticket/5103 | CC-MAIN-2018-34 | refinedweb | 174 | 74.69 |
I have a task where I need to find the lowest Collatz sequence that contains more than 65 prime numbers in Python.
For example, the Collatz sequence for 19 is:
19, 58, 29, 88, 44, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1
This sequence contains 7 prime numbers.
I also need to use memoization so it doesn't have to run a "year" to find it. I found code for memoization of Collatz sequences, but I can't figure out how to get it to work when I need only the prime numbers.
Here is the Collatz memoization code that I found:
lookup = {} def countTerms(n): if n not in lookup: if n == 1: lookup[n] = 1 elif not n % 2: lookup[n] = countTerms(n / 2)[0] + 1 else: lookup[n] = countTerms(n*3 + 1)[0] + 1 return lookup[n], n
And here is my tester for prime:
def is_prime(a): for i in xrange(2,a): if a%i==0: #print a, " is not a prime number" return False if a==1: return False else: return True
Your existing code is incorrectly indented. I assume this task is a homework task, so I won't post a complete working solution, but I'll give you some helpful snippets.
First, here's a slightly more efficient primality tester. Rather than testing if all numbers less than
a are factors of
a, it just tests up to the square root of
a.
def is_prime(a): for i in xrange(2, int(1 + a ** 0.5)): if a % i == 0: return False return True
Note that this function returns
True for
a = 1. That's ok, since you don't need to test 1: you can pre-load it into the
lookup dict:
lookup = {1:0}
Your
countTerms function needs to be modified slightly so that it only adds one to the
lookup count when the current
n is prime. In Python,
False has a numeric value of 0 and
True has a numeric value of 1. That's very handy here:
def count_prime_terms(n): ''' Count the number of primes terms in a Collatz sequence ''' if n not in lookup: if n % 2: next_n = n * 3 + 1 else: next_n = n // 2 lookup[n] = count_prime_terms(next_n) + is_prime(n) return lookup[n]
I've changed the function name to be more Pythonic.
FWIW, the first Collatz sequence containing 65 or more primes actually contains 67 primes. Its seed number is over 1.8 million, and the highest number that requires primality testing when checking all sequences up to that seed is 151629574372. At completion, the
lookup dict contains 3920492 entries.
In response to James Mills's comments regarding recursion, I've written a non-recursive version, and to make it easy to see that the iterative and the recursive versions both produce the same results I'm posting a complete working program. I said above that I wasn't going to do that, but I figure that it should be ok to do so now, since spørreren has already written their program using the info I supplied in my original answer.
I fully agree that it's good to avoid recursion except in situations where it's appropriate to the problem domain (eg, tree traversal). Python discourages recursion - it cannot optimize tail-call recursion and it imposes a recursion depth limit (although that limit can be modified, if desired).
This Collatz sequence prime counting algorithm is naturally stated recursively, but it's not too hard to do iteratively - we just need a list to temporarily hold the sequence while the primality of all its members are being determined. True, this list takes up RAM, but it's (probably) much more efficient space-wise than the stack frame requirements that the recursive version needs.
The recursive version reaches a recursion depth of 343 when solving the problem in the OP. This is well within the default limit but it's still not good, and if you want to search for sequences containing much larger numbers of primes, you will hit that limit.
The iterative & recursive versions run at roughly the same speed (at least, they do so on my machine). To solve the problem stated in the OP they both take a little under 2 minutes. This is significantly faster than my original solution, mostly due to optimizations in primality testing.
The basic Collatz sequence generation step already needs to determine if a number is odd or even. Clearly, if we already know that a number is even then there's no need to test if it's a prime. :) And we can also eliminate tests for even factors in the
is_prime function. We can handle the fact that 2 is prime by simply loading the result for 2 into the
lookup cache.
On a related note, when searching for the first sequence containing a given number of primes we don't need to test any of the sequences that start at an even number. Even numbers (apart from 2) don't increase the prime count of a sequence, and since the first odd number in such a sequence will be lower than our current number its results will already be in the
lookup cache, assuming we start our search from 3. And if we don't start searching from 3 we just need to make sure our starting seed is low enough so that we don't accidentally miss the first sequence containing the desired number of primes. Adopting this strategy not only reduces the time needed, it also reduces the number of entries in the lookup` cache.
#!/usr/bin/env python ''' Find the 1st Collatz sequence containing a given number of prime terms From Written by PM 2Ring 2015.04.29 [Seed == 1805311, prime count == 67] ''' import sys def is_prime(a): ''' Test if odd `a` >= 3 is prime ''' for i in xrange(3, int(1 + a ** 0.5), 2): if not a % i: return 0 return 1 #Track the highest number generated so far; use a list # so we don't have to declare it as global... hi = [2] #Cache for sequence prime counts. The key is the sequence seed, # the value is the number of primes in that sequence. lookup = {1:0, 2:1} def count_prime_terms_iterative(n): ''' Count the number of primes terms in a Collatz sequence Iterative version ''' seq = [] while n not in lookup: if n > hi[0]: hi[0] = n if n % 2: seq.append((n, is_prime(n))) n = n * 3 + 1 else: seq.append((n, 0)) n = n // 2 count = lookup[n] for n, isprime in reversed(seq): count += isprime lookup[n] = count return count def count_prime_terms_recursive(n): ''' Count the number of primes terms in a Collatz sequence Recursive version ''' if n not in lookup: if n > hi[0]: hi[0] = n if n % 2: next_n = n * 3 + 1 isprime = is_prime(n) else: next_n = n // 2 isprime = 0 lookup[n] = count_prime_terms(next_n) + isprime return lookup[n] def find_seed(numprimes, start): ''' Find the seed of the 1st Collatz sequence containing `numprimes` primes, starting from odd seed `start` ''' i = start mcount = 0 print 'seed, prime count, highest term, dict size' while mcount < numprimes: count = count_prime_terms(i) if count > mcount: mcount = count print i, count, hi[0], len(lookup) i += 2 #count_prime_terms = count_prime_terms_recursive count_prime_terms = count_prime_terms_iterative def main(): if len(sys.argv) > 1: numprimes = int(sys.argv[1]) else: print 'Usage: %s numprimes [start]' % sys.argv[0] exit() start = int(sys.argv[2]) if len(sys.argv) > 2 else 3 #Round `start` up to the next odd number if start % 2 == 0: start += 1 find_seed(numprimes, start) if __name__ == '__main__': main()
When run with
$ ./CollatzPrimes.py 65
the output is
seed, prime count, highest term, dict size 3 3 16 8 7 6 52 18 19 7 160 35 27 25 9232 136 97 26 9232 230 171 28 9232 354 231 29 9232 459 487 30 39364 933 763 32 250504 1626 1071 36 250504 2197 4011 37 1276936 8009 6171 43 8153620 12297 10971 44 27114424 21969 17647 48 27114424 35232 47059 50 121012864 94058 99151 51 1570824736 198927 117511 52 2482111348 235686 202471 53 17202377752 405273 260847 55 17202377752 522704 481959 59 24648077896 966011 963919 61 56991483520 1929199 1564063 62 151629574372 3136009 1805311 67 151629574372 3619607 | http://databasefaq.com/index.php/answer/137613/python-sequence-primes-collatz-memorization-finding-the-lowest-collatz-sequence-that-gives-more-that-65-primes | CC-MAIN-2019-13 | refinedweb | 1,385 | 70.77 |
In the last article in this series, I talked about the difference between querying an RDBMS and querying an object database like db4o. As I showed, db4o offers quite a few more ways to query than your typical relational database can, giving you a range of options for dealing with various application scenarios.
This time, I'll continue that theme -- the many options found in db4o -- with a look at how it handles refactoring. As of version 6.1, db4o automatically recognizes and handles three different kinds of refactoring: adding a field, removing a field, and adding a new interface to a class. I won't cover all three (I'll focus on adding a field and changing a class name), but I will introduce you to what's most exciting about refactoring with db4o -- which is its introduction of backward- and forward-compatibility to database change management.
As you'll see, db4o's ability to silently roll with updates and ensure consistency from code to disk takes a lot of the stress out of refactoring the persistence portion of your system. That same flexibility also makes db4o a good candidate for inclusion in a test-driven development process.
Refactoring in the real world
Last month, I talked about querying db4o using both native and QBE-style queries. In that discussion, I suggested readers running the example code should delete the existing database file containing the results from previous runs. This was to avoid "weird" results stemming from the fact that the OODBMS notion of identity isn't the same as that found in relational theory.
That workaround suited my example, but it poses an interesting question in real life. What happens to an OODBMS when the code that defines the objects it stores changes? In an RDBMS, the line between "storage" and "object" is supposedly pretty clear: The RDBMS obeys a relational schema defined by DDL statements executed at some point prior to working with the database. The Java code then either uses handwritten JDBC processing code to map the results of the query into the Java objects, or else the mapping is done "automatically" via a library like Hibernate or the new Java Persistence API (JPA). Either way, the mapping is explicit and has to be modified every time a refactoring takes place.
In theory, there is no difference between theory and practice. But that's true only in theory. Refactoring a relational database and object/relational mapping files should be simple. But in real life, RDBMS refactoring is only clear if the refactoring is purely a Java-code-level issue; in that case, simply changing the mapping is enough to complete the refactoring. If the change is to the relational storage of the data itself, however, then suddenly you're in a whole new world of complexity, so much so that an entire book has been written on the subject. (A book once described by one of my colleagues as "500 pages of database tables, triggers, and views.") Suffice it to say that because a real-world RDBMS frequently contains data that needs to be preserved, just dropping the schema and rebuilding it from DDL statements is not an option.
So now we know what happens to an RDBMS when the Java code defining its objects changes. (Or at least we know what happens to the RDBMS manager, which is a great big headache.) Now let's find out what happens in a db4o database when the code changes.
Setting up the database
If you've read the previous two articles in this series, then my
admittedly primitive database is familiar to you. It currently consists
of one type, the
Person type, whose definition
appears in Listing 1:
Listing 1. Person; }
Next, I populate the database, as shown in Listing 2:
Listing 2. Database at 't0'
import java.io.*; import java.lang.reflect.*; import com.db4o.*; import com.tedneward.model.*; // Version 1 public class BuildV1 { public static void main(String[] args) throws Exception { new File(".", "persons.data").delete(); ObjectContainer db = null; try { db = Db4o.openFile("persons.data"); Person brianG = new Person("Brian", "Goetz", 39); Person jason = new Person("Jason", "Hunter", 35); Person brianS = new Person("Brian", "Sletten", 38); Person david = new Person("David", "Geary", 55); Person glenn = new Person("Glenn", "Vanderberg", 40); Person neal = new Person("Neal", "Ford", 39); Person clinton = new Person("Clinton", "Begin", 19); db.set(brianG); db.set(jason); db.set(brianS); db.set(david); db.set(glenn); db.set(neal); db.set(clinton); db.commit(); // Find all the Brians ObjectSet brians = db.get(new Person("Brian", null, 0)); while (brians.hasNext()) System.out.println(brians.next()); } finally { if (db != null) db.close(); } } }
Notice that I explicitly deleted the file "persons.data" at the beginning
of the code snip in Listing 2. Doing this ensures a clean slate getting
started. In future versions of the Build application, I'll leave the
persons.data file alone to demonstrate the refactoring process. Also
note that the
Person type will change (this will
be the focus of my refactorings), so be sure to
familiarize yourself with the version being stored and/or fetched for each
example. (Look for comments in each version of
Person in the source code for this article, as well as Person.java.svn files in the source tree
of the code. These will make the examples easier to follow.)
Refactor me once!
Up until now, things around the old shop have being going pretty well. The
company database is full of
Persons that can be
queried, stored, and used anytime anyone wants them, and basically everyone
is happy. But Upper Management has just read The latest best-selling upper
management book, called People have feelings too!, and they have
decided the database needs to be modified to include the
Person's mood.
In a traditional object/relational scenario, this implies two major
undertakings: refactoring the code (which I'll discuss below) and
refactoring the database schema to include the new data reflecting
Persons' mood. Now, Scott Ambler has produced some
great resources for RDBMS refactoring (see Resources), but nothing changes the fact that
refactoring a relational database is far more complicated than refactoring
Java code, particularly when you have to preserve existing production
data.
Things are much simpler in an OODBMS, however, because the refactoring takes place entirely in the code, in this case, Java code. It's important to remember that in an OODBMS, the code is the schema. As a result, an OODBMS presents a "single source of truth," so to speak, as opposed to the O/R world where that truth (so called) is encoded in two different places: the database schema and the object model. (Which one "wins" in the event of a conflict is the subject of much debate and angst amongst Java developers.)
Refactoring the database schema
My first step is to create a new type that defines all the moods to track. This is easily done using a Java 5 enumeration type, as shown in Listing 3:
Listing 3. Howyadoin'?
package com.tedneward.model; public enum Mood { HAPPY, CONTENT, BLAH, CRANKY, DEPRESSED, PSYCHOTIC, WRITING_AN_ARTICLE }
Second, I need to change the
Person code by
adding a field and the appropriate property methods to track mood, as shown
in Listing 4:
Listing 4. No, howYOUdoin'?
package com.tedneward.model; // Person v2 public class Person { // ... as before, with appropriate modifications to public constructor and // toString() method public Mood getMood() { return mood; } public void setMood(Mood value) { mood = value; } private Mood mood; }
Checking in with db4o
Before I do anything else, let's see how db4o would respond to a
query that looked for all the
Brians in the
database right now. In other words, how will db4o react if I run an
existing
Person-based query against the database
when no
Mood instances are stored (shown in Listing 5)?
Listing 5. How's everybody doing?
import com.db4o.*; import com.tedneward.model.*; // Version 2 public class ReadV2 { public static void main(String[] args) throws Exception { // Note the absence of the File.delete() call ObjectContainer db = null; try { db = Db4o.openFile("persons.data"); // Find all the Brians ObjectSet brians = db.get(new Person("Brian", null, 0, null)); while (brians.hasNext()) System.out.println(brians.next()); } finally { if (db != null) db.close(); } } }
The results are somewhat startling in their passivity, as shown in Listing 6:
Listing 6. db4o takes it in stride
[Person: firstName = Brian lastName = Sletten age = 38 mood = null] [Person: firstName = Brian lastName = Goetz age = 39 mood = null]
Not only did db4o not choke on the fact that the two definitions
of
Person (the one on disk and the one in code)
weren't identical, it went one step further: it looked at the data on disk,
determined that the
Person instances there didn't
have a mood field, and silently substituted in the default value of
null. (Which, by the way, is exactly what the Java Object
Serialization API would do in the same situation.)
The most important thing here is that db4o silently handled the mismatch between what it saw on the disk and in the type definition. This turns out to be a pretty consistent theme throughout the db4o refactoring story: Wherever possible, db4o silently deals with version mismatches. It either expands the elements on disk to include added fields, or, if the fields don't exist in the class definition it is working with in the given JVM, it ignores them.
Code-to-disk compatibility
This idea that db4o somehow adjusts as necessary to missing or extraneous fields on disk deserves exploration, so let's see what happens when I update the data on disk to include mood, as shown in Listing 7:
Listing 7. We're alright
import com.db4o.*; import com.tedneward.model.*; // Version 2 public class BuildV2 { public static void main(String[] args) throws Exception { ObjectContainer db = null; try { db = Db4o.openFile("persons.data"); // Find all the Persons, and give them moods ObjectSet people = db.get(Person.class); while (people.hasNext()) { Person person = (Person)people.next(); System.out.print("Setting " + person.getFirstName() + "'s mood "); int moodVal = (int)(Math.random() * Mood.values().length); person.setMood(Mood.values()[moodVal]); System.out.println("to " + person.getMood()); db.set(person); } db.commit(); } finally { if (db != null) db.close(); } } }
In Listing 7, I've found all the
Persons in the database and randomly assigned them
Moods. In a more real-world application, I would
likely be working with a baseline set of data rather than a randomly chosen
one, but this works for the example. Running the code produces the output shown in Listing
8:
Listing 8. How's everybody feeling today?
Setting Brian's mood to BLAH Setting David's mood to WRITING_AN_ARTICLE Setting Brian's mood to CONTENT Setting Jason's mood to PSYCHOTIC Setting Glenn's mood to BLAH Setting Neal's mood to HAPPY Setting Clinton's mood to DEPRESSED
You can verify this output by running ReadV2
again. Better yet, you could run the original query version, ReadV1 (which looks just like ReadV2 except that it
was compiled against the V1 version of
Person).
When you do so, it produces the following:
Listing 9. The old version of 'How's everybody feeling today?'
[Person: firstName = Brian lastName = Sletten age = 38] [Person: firstName = Brian lastName = Goetz age = 39]
What's remarkable about the output in Listing 9 is that it's no different
from what db4o spit back before I added the
Mood
extension to the
Person class (in Listing 6) -- which means db4o is both backward- and
forward-compatible.
Refactor me twice!
Suppose you want to change the type of a field in an existing class, for
example by changing
Person's age from an integer
type to a short type? (People don't generally live beyond 32,000 years,
after all -- and I think it's safe to suggest that if that does ever
become a concern, you'll be able to refactor the code back to an integer
field.) Assuming the two types are similar in nature, such as int-to-short
or float-to-double, db4o just silently rolls with the changes -- once
again more or less emulating the Java Object Serialization API. The downside
of this sort of operation is that db4o could accidentally truncate a value.
This would only happen if the value were a "narrowing conversion," where a
value exceeded the possible range value allowed in the new type, such as
trying to convert from long to int, for example. Caveat emptor -- and
be sure to unit-test thoroughly during development or prototyping.
Actually, db4o's knack for backward-compatibility deserves a bit more explanation. Basically, when db4o sees the field of the new type, it creates a new field on disk with the same name but a new type, just as if it were any other new field added to the class. This also means that the old values are still present in the field of the old type. So, once again, you can always "call back" old values by refactoring the field to the original value, which can either be viewed as a feature or a bug, depending on your point of view at the time.
Note that method changes to the class are irrelevant to db4o because it doesn't store methods or method implementations as part of the stored object data, and ditto for constructor refactorings. Only fields and the name of the class itself (discussed next) are of any importance to db4o.
Third time's ... tricky
In some cases, the refactoring that needs to happen is a bit more
drastic, such as changing the name of a class entirely (meaning either the
class name or the package it lives in). Something like this is a drastic
change to db4o because it stores objects in a manner that keys off of
classname. When db4o is looking for instances of
Person, for example, it looks in specific areas at
blocks that are tagged with the name
com.tedneward.model.Person. So, changing the name
effectively puts db4o in an untenable situation: it can't magically infer
that
com.tedneward.model.Person is now
com.tedneward.persons.model.Individual. Fortunately,
there are a couple of ways to teach db4o how to manage the transition.
Changing the names on disk
One way you can ease db4o into such a dramatic change is to write a refactoring tool of your own, using the db4o Refactoring API to open the existing data file and change the name on disk. You can do this with a pretty simple set of calls, as shown in Listing 10:
Listing 10. Refactoring from Person to Individual
import com.db4o.*; import com.db4o.config.*; // ... Db4o.configure().objectClass("com.tedneward.model.Person") .rename("com.tedneward.persons.model.Individual");
Notice that the code in Listing 10 uses the db4o Configuration API to get
hold of a configuration object, which in turn is used as a sort of
"meta-control" over most of db4o's options -- you will use this API rather than command-line flags or configuration files to set particular settings at
run time. (Though there's nothing stopping you from creating your
own command-line flags or configuration files to drive Configuration API
calls.) The
Configuration object is then used to
obtain the
ObjectClass instance for the
Person class ... or, to be more precise, the
ObjectClass instance representing the stored
Person instances on disk.
ObjectClass contains a number of other options as well,
some of which I'll show you later in the series.
Using an alias
In some cases, the data on disk has to remain in place to support earlier applications that cannot be recompiled for whatever reasons, technical or political. In these cases, the V2 application has to somehow accommodate pulling V1 instances in and turn them into V2 instances in memory. Fortunately, you can rely on db4o's alias feature to create a shuffle step while storing and retrieving objects to/from disk. This allows you to vary the types stored from the types used in memory.
db4o supports three different kinds of aliases, one of which is only
useful when sharing data files between the .NET and Java flavors of db4o.
The alias at work in Listing 11 is
TypeAlias, which
effectively tells db4o to swap out an "A" type in memory (the runtime
name) for a "B" type on disk (the stored name). Enabling this is
a two-line operation.
Listing 11. The TypeAlias shuffle
import com.db4o.config.*; // ... TypeAlias fromPersonToIndividual = new TypeAlias("com.tedneward.model.Person", "com.tedneward.persons.model.Individual"); Db4o.configure().addAlias(fromPersonToIndividual);
When run, db4o will now recognize any call to query
Individual objects from the database as a request to instead
look across stored
Person instances; this means
that the
Individual class should have fields of a
similar name and type to those stored in
Person,
which db4o will map appropriately.
Individual instances will then be stored under the
Person name.
In conclusion
Every refactoring example in this article was made much simpler by the fact that the schema in an object database is the class definition itself, not a stand-alone DDL definition in a different language. Refactoring in db4o is an exercise in code, which can often be established easily through a configuration call, or at worst by writing and running a conversion utility to upgrade existing instances from the old type to the new one. This type of conversion is necessary for almost all RDBMS refactorings in production as well.
db4o's powerful refactoring capability makes it useful during development, when the rich domain objects being designed are still undergoing a lot of churn and you are refactoring on a daily, if not hourly, basis. Using db4o for unit testing and test-driven development can save you a great deal of time mucking around in the database, particularly if the refactorings are simple field addition/removal or type/name changes.
That's all for now, but remember this: If you're going to write with objects, and persistence truly is "just an implementation issue," then why would you look to flatten perfectly good objects into flat squares if you don't have to?
Download
Resources
Learn
- "The busy Java developer's guide to db4o: Introduction and overview" (Ted Neward, developerWorks, March 2007): Introduces db4o and explains why it has become an important alternative to today's relational databases.
- "The busy Java developer's guide to db4o: Queries, updates, and identity" (Ted Neward, developerWorks, March 2007): Explores db4o's various mechanisms for finding and retrieving data.
- Refactoring Databases: Evolutionary Database Design (Scott Ambler and Pramod J. Sadalage; Addison-Wesley Signature Series, 2006): A 500-page tome on database refactoring.
- Book review -- Refactoring Databases: Evolutionary Database Design (Eric Naiburg, developerWorks, September 2006): A positive review from the Rational Edge.
- The db4o home page: Learn more about db4o.
- New to IBM Information Management: Still not sold on OODBMS? Get more information about IBM's powerful family of relational database management system (RDBMS) servers.
- ODBMS.org: An excellent collection of free material on object database technology.
- The developerWorks Java technology zone: Hundreds of articles about every aspect of Java programming.
Get products and technologies
- Download db4o: An open source native Java programming and .NET database.. | http://www.ibm.com/developerworks/java/library/j-db4o3/index.html | CC-MAIN-2014-52 | refinedweb | 3,213 | 50.87 |
Passing data into a new conversation (in JSF)Bisser Peshev Mar 2, 2010 4:19 PM
I was wondering about the best way to pass data into a newly-created long-running conversation without polluting the conversation with unnecessary beans.
A very simplistic scenario would be: I am working with a JSF page (Page1) whose backing bean is @ConversationScoped for some reason (for example, it must be able to survive a post-redirect-get operation). From that page, I wish to navigate to another page (Page2) which must start a new long-running conversation. And I wish to pass data (say, customer name) from the first page into the second page. But I don't want the first page's backing bean to be part of the newly created conversation.
I don't know the best way -- I'm asking about it here. But I can think of two ways to pass data:
One way would be to pass the data REST-style (as request parameters):
page1.xhtml:
<h:inputText<br/> <h:commandButton
Page1.java:
@ConversationScoped @Named public class Page1 implements Serializable { public String toPage2() { return "page2?faces-redirect=true&custName=" + custName; } ...
page2.xhtml:
<f:metadata> <f:viewParam </f:metadata> ...
Page2.java:
@ConversationScoped @Named public class Page2 implements Serializable { @Inject Conversation conversation; @PostConstruct public void init() { conversation.begin(); } ...
I have no idea if I'm doing it right, but this works.
However, I'm interested in the other way -- passing data via server-side state instead of passing it as request parameters:
Page1.java:
@ConversationScoped @Named public class Page1 implements Serializable { @Inject Page2 page2; public String toPage2() { page2.setCustName(custName); return "page2"; } ...
Here's my real problem: when using this method, and if Page2.java starts a long-running conversation (see the Page2 snippet further above that starts the conversation), then Page1.java will be placed in the long-running conversation too. I don't want it there. I want ONLY Page2.java in the conversation. How can I do that? I understand it's impossible to remove beans from a context, but I don't quite understand why? Why shouldn't I be allowed to remove Page1 from the conversation? I don't need it there. Or... what can I do, so that Page1 doesn't get included into the conversation in the first place?
If you reached this point, thank you for your patience! I just want to learn some 'best practices'.
1. Re: Passing data into a new conversation (in JSF)Francisco Jose Peredo Noguez Mar 2, 2010 4:51 PM (in response to Bisser Peshev)
I would also love to know the answer to this
2. Re: Passing data into a new conversation (in JSF)Francisco Jose Peredo Noguez Mar 2, 2010 4:53 PM (in response to Bisser Peshev)
Maybe what is needed is to have an special scope type for information that is going to be sent from one conversation into another?
3. Re: Passing data into a new conversation (in JSF)Francisco Jose Peredo Noguez Mar 2, 2010 4:55 PM (in response to Bisser Peshev)
Or some new kind of annotation/interceptor? for the method that makes the transition to the other conversation? something like @ConversationTransition
@ConversationScoped @Named public class Page1 implements Serializable { @Inject Page2 page2; @ConversationTransition public String toPage2() { page2.setCustName(custName); return "page2"; }
4. Re: Passing data into a new conversation (in JSF)Matthieu Chase Heimer Mar 2, 2010 7:45 PM (in response to Bisser Peshev)
You want the Page1 bean and the Page2 bean to have different lifetimes so you shouldn't be putting them in the same scope. Having Page1 be responsible for copying it's data into the (not yet in existence) Page2 bean is not something you want to do.
Sounds like you need the flash scope from JSF. You could put the Page1 bean in flash scope and have the Page2 bean in conversation scope. The Page2 bean would read the state out of Page1 via injection before Page1 went out of scope.
Maybe Seam3 will have a flash scope, I think there was some discussion about it.
The other answer is; Put the information that you need to outlast the 1st conversation in a scope that out lasts the conversation. Putting the customer name in the Session scope comes to mind.
5. Re: Passing data into a new conversation (in JSF)Bisser Peshev Mar 2, 2010 8:27 PM (in response to Bisser Peshev)
Thank you for your reply!
Cay Horstmann is not particularly enthusiastic about the Flash scope:
He's a famous guy and he co-authored
Core JavaServer Faces. I suppose he knows what he's talking about but I'm beginning to learn the stuff now, so I can't be sure of anything. In time, I will develop my own views on the various matters. For example, I'm not super-enthusiastic about CDI's limitations (especially it's insistence to control everything, it's lack of support for PhaseListeners, and its lack of APIs to manipulate the contexts programmatically or to request a bean from a context programmatically), but I might be wrong. I will learn and see...
Regarding our scenario -- I need Page1 to be in temporary @ConversationScope. (Temporary conversations are actually quite similar to the Flash scope -- they survive a redirect.) Of course, it would be perfect if Page1 was @RequestScoped -- this would mean that it would not be included in the newly-created long-running conversation, but instead would be discarded at the end of the request. Unfortunately, the scenario stipulates that this is not an option, because I need Page1 to be able to survive a redirect. @RequestScoped beans don't survive redirects. We need @ConversationScope.
To me, it would have been great to have some API that would allow me to remove a bean from a context. In fact, a rich API would have suited me perfectly. Alas, no luck. We need to make do with what we have (or use some other DI framework?). And since I would love to know the 'best practice' for our scenario, I asked in the forum. :(
It seems to me that it's better for Page1 to inject Page2, instead of the other way around. The reason, IMHO, is that Page2 might be the entry point for some common wizard or something. It's not supposed to know that Page1 (or any other page) exists. Instead, the clients of the wizard need to pass some data into the wizard. At least, that's what I think is more appropriate.
6. Re: Passing data into a new conversation (in JSF)Ian R Mar 3, 2010 3:59 AM (in response to Bisser Peshev)
I think I know what you mean, but maybe a different line of thought might clarify things (or blow my ideas out of the water!).
I tend not to use PRG if I'm redirecting to the same page. Usually this is the case when form validation fails (basic and extended/custom validation). I would simply be not using PRG until you are ready to go away from that page and all data submitted is valid. Once you are ready to do that, then grab Page2 and insert the values there and go from there.
Obviously you've simplified the context, but maybe Page2 is actually part of the same conversation as Page1 and hence should be treated that way. Hard to tell. Or Page1 doesn't need to be a conversation scoped thing - or at least a different conversation.
Hope that makes some sense.
Ian
7. Re: Passing data into a new conversation (in JSF)Gavin King Mar 3, 2010 6:28 AM (in response to Bisser Peshev)
Bisser, I recommend that you try working with CDI not against it. Suspend disbelief for a bit and learn the CDI Way. Don't try to do things the way you're used to doing them in other frameworks. The
limitationsyou're describing are there by design. We thought very carefully about them, and we think they are the Right Thing.
(P.S. it's very easy to obtain a bean programmatically.)
8. Re: Passing data into a new conversation (in JSF)Bisser Peshev Mar 3, 2010 5:17 PM (in response to Bisser Peshev)
Thank you, guys!
@Ian
I didn't mean to use PRG for simple postbacks. I meant that Page1 needed it for some other purpose -- say, to receive some data from Page0. Whatever the reason, we imagine that Page1 is standalone and @ConversationScoped and Page2 is the beginning of a wizard. When the wizard starts, a long-running conversation starts. I don't want Page1, which is not part of the wizard, to be in the conversation context.
@Gavin
Could you, please, tell me the recommended way to start a long-running conversation without automatically inheriting everything from the current temporary conversation? In other words, using the scenario I described above, to start a conversation that contains only Page2, but not Page1. (Assuming that both Page1 and Page2 are @ConversationScoped.)
As for
obtaining a bean programmatically, I meant a different thing. I already read the Weld reference but I couldn't find what I need there. Here's a detailed explanation:
1. I'm in a PhaseListener. (Of course, it's NOT instantiated by Weld/CDI (!), because PhaseListeners are not supported by CDI, so there's no dependency injection.)
2. The PhaseListener needs a bean from the Session Context. (For example, to perform some authentication activity.)
How can the PhaseListener get that bean from the session? I know how to do it without CDI:
externalContext.getSessionMap().get(
myBean);
(Of course, I would not be using a String literal directly. I would be using either a static final constant, or, better yet, a static method in MyBean that would know how to obtain the bean.)
How can I force CDI to give me the bean from the Session context?
Thank you all for reading and responding!
9. Re: Passing data into a new conversation (in JSF)Ian R Mar 4, 2010 1:37 AM (in response to Bisser Peshev)
Ok, I thought it wouldn't be that simple! Thought I'd say it anyway just it case.
Maybe you need to end one conversation and start another one. That's one reason for not starting the conversation as you construct the bean - you lose some flexibility. You can start the conversation when you call one of the methods. You could end the conversation on Page1, put the data into Page2 then start a new conversation on Page2 (possibly with the same method that sets the data).
Without stepping on Gavin's toes, I thought that a PhaseListener could be injected into. The PhaseListener can't be a managed bean itself (you can't inject it anywhere else), but it can be injected into - the container would manage that. If that is true, inject yourself a BeanManager or even the bean directly and do it that way. The BeanManager is just a JNDI resource, so I can't see why not. I'm sure somebody will correct me if I'm wrong.
Ian
10. Re: Passing data into a new conversation (in JSF)Nicklas Karlsson Mar 4, 2010 8:25 AM (in response to Bisser Peshev)
I recall filing a JIRA for the Phase Listener injection. It's more of a platform / JSF decision anyway but I agree that injection should be supported wherever possible
11. Re: Passing data into a new conversation (in JSF)Bisser Peshev Mar 4, 2010 1:29 PM (in response to Bisser Peshev)
@Ian:
I cannot end a temporary conversation. I'd get an exception, if I tried. But when I start a long-running conversation, everything that's already in the temporary conversation becomes part of the long-running conversation. This includes both Page1 and Page2. I don't want Page1 to be part of the long-running conversation.
(By the way, I'm sorry about being ambiguous as to when I start the long-running conversation. In my second example, where I try to transfer state via injection and not via request parameters, I don't begin() the conversation in the @PostConstruct method. That wouldn't be practical and may lead to exceptions if I try to begin() a conversation that's already long-running.)
PhaseListeners cannot be injected into. That's quite unfortunate because I have always been using a PhaseListener to verify a user's state and to redirect to the login page in case of session expiration. (I don't use Filters but PhaseListeners, because quite often I need to redirect while processing an Ajax request.)
@Nicklas
PhaseListeners are not my real problem. They were just an example.
The problem is what to do if I'm inside a manually-created (or, rather, not-created-by-CDI) bean and I want to hook into CDI. The PhaseListener example just illustrates how easy we can get into such a situation.
I wish to be able to work with instances that were not created by CDI, and from time to time to ask CDI to do some injection for me. This is quite possible with other DI frameworks.
For example, we now have a gigantic application that does not use CDI. Imagine that we wish to migrate to CDI but that migration cannot be done all at once. Instead, we wish to migrate incrementally. This means that initially we wish to hook into CDI at a couple of places and to get dependency injection there. The rest of the application remains the same. In time, we may increase our CDI usage and do more injection at more places. But it is quite likely that we will never need to migrate 100% of our application into CDI. We only wish some injection at some places. Is that possible with CDI? My (bad) experience with PhaseListeners seems to indicate that it's not possible. If I wish to use CDI, it must take control over my whole application. Frankly, that's something that scares me. I wish to retain control over my own application and only use CDI at certain places where I need it.
Again, thank you all for taking the time to reply. I truly wish to like CDI and, if I learn the right way to use it, as well as how to do certain things, I may begin to like it. (I already like the fact that I can inject session-scoped beans containing data about the logged-in user into EJBs -- this opens up quite a few possibilities beyond the basic role-based authorization.) However, right now I'm afraid of CDI and its desire to dominate my application.
12. Re: Passing data into a new conversation (in JSF)Bisser Peshev Mar 4, 2010 5:22 PM (in response to Bisser Peshev)
@Ian:
Hey, Ian, in addition to my comment in the post above, I decided to give the BeanManager thing a try, even though it's an SPI class and I've been reluctant to try it until now. Here's something that works -- it can be used to manually lookup beans from inside a PhaseListener:
public class CdiUtils { public static BeanManager getBeanManager() { try { return (BeanManager) new InitialContext().lookup("java:comp/BeanManager");//"java:app/BeanManager"); } catch (NamingException e) { throw new BeanManagerNotFoundException("Couldn't locate the CDI BeanManager", e); } } public static <S extends Annotation,T> T getBean(Class<S> scopeType, Class<T> beanType, Annotation... qualifiers) { final BeanManager bm = getBeanManager(); final Context ctx = bm.getContext(scopeType); final Set<Bean<?>> beans = bm.getBeans(beanType, qualifiers); for (Bean<?> bean : beans) { return (T)ctx.get(bean); } return null; } }
NOTE: I have removed most of the error-handling code (for the sake of simplicity). Resolution errors, and all other errors, should be handled accordingly.
It can be used like this:
UserState userState = CdiUtils.getBean(SessionScoped.class, UserState.class);
I don't know if what I'm doing is legal or correct. I didn't even try to optimize it or to see if there's a better/easier way. I just gave it a try, that's all.
P.S. I know such code is not the epitome of testability, but I'm currently only interested in getting the work done. Beautiful testability will be left to the reader.
13. Re: Passing data into a new conversation (in JSF)Ian R Mar 5, 2010 1:10 AM (in response to Bisser Peshev)
Funny you should mention it - I have an
InjectionUtilitiesclass class to do similar things. But I have a slightly different approach - not necessarily the cleanest, but it does certain things for me.
I have a ServletContextListener, and I inject my InjectionUtilities class into it. I inject my BeanManager into my InjectionUtilities class, so I don't need to look it up via JNDI (I have managed to avoid JNDI altogther in this project). But my InjectionUtilities class has a private constructor, and a @Singleton stereotype. Once I have instantiated it, I only have static methods on it. The main one - getBeanManager(). That way, every class can get a BeanManager without needing to look it up via JNDI and you know it is available once the application has started up.
The other methods on this class - injectClass() (manually forces the injection of the object passed into it) and createManagedClass() (which creates it via CDI and then does the injection). Damn useful and when I start instantiating classes rather than the container doing it, I have injection available. And it is needed - for example, I have a JSF error handler that isn't container managed. I liked @Injecting so much people might think I'm a drug addict!
14. Re: Passing data into a new conversation (in JSF)Bisser Peshev Mar 5, 2010 3:24 PM (in response to Bisser Peshev)
Well, I suppose that if you want the BeanManager injected, and if you do not want to depend on things like ServletContextListener, you might do something like this:
public class CdiUtils implements Extension { public void init(@Observes AfterBeanDiscovery event, BeanManager beanMgr) { ... some initialization & setup here ... } ...
It worked for me -- it injected the BeanManager.
Of course, at the moment I cannot be sure which method is a 'best practice', and which one is not. I'm just mentioning it as an option. | https://developer.jboss.org/thread/179593 | CC-MAIN-2018-39 | refinedweb | 3,055 | 63.9 |
Deploy GitLab CE on a new Azure Kubernetes cluster
I would like to share my experience to create a small Kubernetes cluster on Azure Container Service (AKS Preview) and deploy GitLab CE on it using the Helm chart.
Creating a cluster in AKS should be an easy task but sometimes things don’t go at they suppose to. There are at least two main issues that I found:
- When using the Azure Portal, there is a big chance something will go wrong and the cluster will be always in “creating” state and you can only remove it by using the Azure CLI.
- There is no way from Azure CLI to understand, which VM sizes are available in which region. Definitely, not all VM sizes can be used in AKS and in different regions the set of available VM sizes is different. It is OK when someone else pays for your Azure subscription, but using your own freemium subscription or paying out of own pocket might end up not so well.
- You need a cluster with at least three nodes, since smaller VMs have limitations on how many data volumes can be attached to them and GitLab uses quite a few persistent volumes.
So, I managed to install the cluster with three nodes of the recent VMs, which are still quite cheap — the relatively new B-series machine size. These machines, according to Microsoft, are suited best for irregular CPU load, like web sites, development or test environments. This suits my purpose ideally.
I used B2s machine size with 2 vCPUs, 8 Gb RAM and 8 Gb local SSD.
You need Azure CLI if you want to repeat what I have done. If you don’t have it, the installation is described on this page. You also need kubectl, the management command like tool for Kubernetes.
After logging in to the Azure account, I had to register a new resource group, because AKS requires a separate resource group. So do this, I executed this command:
az group create --name ubiquitous --location eastus
Although I am not from the US, I chose this region because it has most VM sizes available. As you can imagine,
ubiquitous is my own resource group name and you should be using something else.
Next, the cluster creation. Doing it using Azure CLI is rather simple.
az aks create --resource-group ubiquitous --name ubiquitous --node-count 3 --generate-ssh-keys --node-vm-size Standard_B2s
The cluster name can be anything and I chose to use my company name, which matches with the resource group name. As you can see I specified three nodes for my cluster and B2s VM size.
The process is quite lengthy so leave it to run for at least 20 minutes or more. At the end, it will produce a JSON output, which you might record but I have not used it.
In order to be able to use kubectl with your new cluster, you need to add the cluster context to your
.kube/config file. It can be easily done using Azure CLI.
az aks get-credentials -g ubiquitous -n ubiquitous
Here,
-g is the shortcut for
--group and
-n — for
--name , where you specify the name of your cluster.
The next thing is to install GitLab CE. The best way of installing it on Kubernetes is to use the Omnibus Helm chart. If you don’t know about Helm, learn more here.
Note: it will be replaced by a new “cloud ready” chart, which is in alpha stage now.
First things first, and if you don’t have Helm, you need to install it first. Installation is different per platform, so the best way to do it is to refer to the installation instructions.
After Helm is installed, we need to install Tiller, the Helm agent in Kubernetes. It is very easy to do by running this command:
helm init
Since you are already have your current Kubernetes context switched to the new cluster, it will just go there and install Tiller in the default namespace.
Then, we need to add GitLab Helm repository:
helm repo add gitlab
Before installing, remember that GitLab will use kube-lego to create SSL for your Ingress using Let’s Encrypt. But you need to have a domain, which you will use for your GitLab instance. You also need to control DNS for this domain. This is because it is necessary to add a wildcard DNS entry and point it to Ingress, which will be configured by GitLab.
Here, you can either get an external IP address for your cluster in advance, create a DNS entry and specify this address in the Helm deployment command. Or, as I did, rely on Azure to assign the IP address but in this case you will have just a few minutes to create the DNS entry, since you need to do it as soon as Ingress gets the address but before GitLab deployment is finished. This is the way I used.
So, the installation is executed by this command (remember to use your own domain name):
helm install --name gitlab --set baseDomain=app.ubiquitous.no,gitlab=ce,legoEmail=dummy@ubiquitous.no,provider=acs gitlab/gitlab-omnibus
As you can see, I chose to use the
app.ubiquitous.no subdomain, so I needed to add a DNS entry for it.
In order to see whether the IP address is already available, I used this command:
kubectl get svc -w --namespace nginx-ingress nginx
At first, it produces output, where the
EXTERNAL_IP will be shown as
<pending> but let it run for a few minutes until a new line will appear and show the newly assigned external IP address.
Now, quickly add an
A DNS entry for your (sub)domain and use this new address. In my case, the DNS entry was:
A *.app.ubiquitous.no 23.96.2.189
I intentionally expose the real details here since it is a test cluster with non-production data. In addition, it is quite secure.
At this moment, the GitLab instance is still being deployed. Azure is not very fast on completing persistent volume claims and before this is done, GitLab pods will not be operational. It took about 20 minutes in my case for everything to be set up.
Now, after all is done, I got a GitLab CE instance running on Kubernetes cluster in Azure. Helm deployment told Ingress to use a few host names:
gitlab ,
mattermost and
registry . These host names are for the subdomain specified for the installation. So in my case, I am accessing GitLab by going to. When I went there the first time, I was able to specify the new password for the root user, so my advice is to log in the first time as fast as you can.
What is included by default? Well, quite a lot:
- GitLab CE itself, which hosts your Git repositories and handles issues and wikis.
- GitLab Runner for Kubernetes, which is used to build and deploy your applications using the CI/CD pipeline.
- GitLab Docker Registry, allowing you to host Docker images in individual Docker repositories for each project.
- Mattermost chat system.
- Prometheus time-series database, which will collect metrics for GitLab itself and for pods that the CI pipeline will deploy.
Hence that Kubernetes integration is enabled and configured out of the box, so you can use their default Auto Deploy CI/CD configuration and start building and deploying your apps to Kubernetes if you have a Dockerfile in your repository or your app can be built using one of default Heroku buildpacks. Read more about Auto Deploy here.
Prometheus is not exposed to the outside world, so if you want to reach it, you can use this command:
kubectl port-forward svc/gitlab-gitlab 9090:9090
to set up port forwarding to the GitLab service, then open to reach the Prometheus UI.
Later, I will describe how to use Auto Deploy in GitLab to build and deploy ASP.NET Core applications.
Finally, if some configuration for GitLan is required, you need to change the Ruby file that GitLab sues for configuration. Since configuration is stored on a persistent volume, it can safely be changed and the changes will remain after the pod or the whole cluster restarts.
But since there are neither root login nor SSH certificate for the pod, the easiest way to configure GitLab is to use kubectl exec. In order to use it, you have to know the name of the pod where GitLab runs. To do this, you can list all pods:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
gitlab-gitlab-764bd7665-w94t6 1/1 Running 1 22h
gitlab-gitlab-postgresql-5fff4f67bb-lp74p 1/1 Running 0 22h
gitlab-gitlab-redis-6c88945d56-gm5qc 1/1 Running 0 22h
gitlab-gitlab-runner-f7c85548-bl8c5 1/1 Running 7 22h
You can see some pods to run Postgresql, Redis, the GitLab Runner, but the first pos is the one we need. So, if we run this command:
kubectl exec -it gitlab-gitlab-764bd7665-w94t6 -- /bin/bash
we will get the pod bash prompt in sudo mode. From there everything is easy. To change the configuration, just edit the Ruby file:
vim /etc/gitlab/gitlab.rb
You can of course use something like nano if you aren’t familiar with vim.
When all changes are done, save the file, quit the editor and run these commands:
gitlab-ctl reconfigure
gitlab-ctl restart
Happy GitLabbing! | https://alexey-zimarev.medium.com/deploy-gitlab-ce-on-a-new-azure-kubernetes-cluster-9251100df5d7?source=post_page-----9251100df5d7-------------------------------- | CC-MAIN-2022-40 | refinedweb | 1,580 | 68.81 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.