When you're using a component architecture, the ability to share state among different components will inevitably become an issue as your application grows.
Let's pretend we had an app with the following architecture, each circle representing a different component.
Now let's pretend that we had a piece of state that was needed throughout various levels of our application.
The recommended solution for this problem is to move that state up to the nearest parent component and then pass it down via props.
This works, and most of the time it's the right solution. However, there are times when passing props through intermediate components can become overly redundant or downright unmanageable. Take a tool like React Router for example. React Router needs to have the ability to pass routing props to any component in the component tree, regardless of how deeply nested the components are. Because this is such a significant problem, React comes with a built-in API to solve it called Context.
Context provides a way to pass data through the component tree without having to pass props down manually at every level. - The React Docs
Now that we know the problem that Context solves, how do we use it?
The Context API
For our example, let's say we're building an app that is used by both English and Spanish speaking countries. We want to expose a button that when it's clicked, can toggle the text of our entire application between English and Spanish.
From a high level, if you think about what's needed to solve this problem, there are two aspects to it.
We need a way to declare the data that we want available throughout our component tree. In our example, that data is a locale value that will be either en or es.
We need a way for any component in the component tree that requires that data to be able to subscribe to it.
React gives us the ability to do both of those things whenever we create a new Context using the
React.createContext method. Typically, you create a new Context for each unique piece of data that needs to be available throughout your component tree. Based on our example, we'll create a
LocaleContext.
const LocaleContext = React.createContext()
Now if we examine our
LocaleContext, you'll notice that it has two properties, both of which are React components,
Provider, and
Consumer.
Provider allows us to "declare the data that we want available throughout our component tree".
Consumer allows "any component in the component tree that needs that data to be able to subscribe to it".
Provider
You use
Provider just like you would any other React component. It accepts a
value prop which is the data that you want available to any of its
children who need to consume it.
<MyContext.Provider value={data}>
  <App />
</MyContext.Provider>
In our example, we want
locale to be available anywhere in the component tree. We also want to update the UI (re-render) whenever it changes, so we'll stick it on our component's state.
// LocaleContext.js
import React from "react"

const LocaleContext = React.createContext()

export default LocaleContext
import React from 'react'
import LocaleContext from './LocaleContext'

export default function App () {
  const [locale, setLocale] = React.useState('en')

  return (
    <LocaleContext.Provider value={locale}>
      <Home />
    </LocaleContext.Provider>
  )
}
Now, any component in our component tree that needs the value of
locale will have the option to subscribe to it using
LocaleContext.Consumer.
Consumer
Again, the whole point of the
Consumer component is it allows you to get access to the data that was passed as a
value prop to the Context's
Provider component. To do this,
Consumer uses a render prop.
<MyContext.Consumer>
  {(data) => {
    return (
      <h1>The "value" prop passed to "Provider" was {data}</h1>
    )
  }}
</MyContext.Consumer>
Now in our example, because we passed
locale as the
value prop to
LocaleContext.Provider, we can get access to it by passing
LocaleContext.Consumer a render prop.
// Blog.js
import React from 'react'
import LocaleContext from './LocaleContext'

export default function Blog () {
  return (
    <LocaleContext.Consumer>
      {(locale) => <Posts locale={locale} />}
    </LocaleContext.Consumer>
  )
}
Updating Context State
At this point, we've seen that because we wrapped our whole app in
<LocaleContext.Provider value={locale}>, any component in our application tree can get access to
locale by using
LocaleContext.Consumer. However, what if we also want to be able to toggle it (from en to es and back) from anywhere inside of our component tree?
Your first intuition might be to do something like this.
export default function App () {
  const [locale, setLocale] = React.useState('en')

  const toggleLocale = () => {
    setLocale((locale) => {
      return locale === 'en' ? 'es' : 'en'
    })
  }

  return (
    <LocaleContext.Provider value={{
      locale,
      toggleLocale
    }}>
      <Home />
    </LocaleContext.Provider>
  )
}
What we've done is added a new property to the object we pass to
value. Now, anywhere in our component tree, using
LocaleContext.Consumer, we can grab
locale OR
toggleLocale.
Sadly, the idea is right, but the execution is a little off. Can you think of any downsides to this approach? Hint, it has to do with performance.
Just like React re-renders with prop changes, whenever the data passed to
value changes, React will re-render every component which used
Consumer to subscribe to that data. The way in which React knows if the data changed is by using "reference identity" (which is kind of a fancy way of saying oldObject === newObject).
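Reference identity is easy to see in plain JavaScript — two objects with identical contents are still different references, which is all the Context change check compares:

```javascript
// Two literals with the same contents are different references.
const a = { locale: 'en' };
const b = { locale: 'en' };

console.log(a === b); // false — different objects, so Context thinks "changed"
console.log(a === a); // true  — same reference, so "unchanged"

// The contents are equal; reference identity just never looks at them.
console.log(JSON.stringify(a) === JSON.stringify(b)); // true
```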
Currently with how we have it set up (
value={{}}), we're passing a new object to
value every time that
App re-renders. What this means is that when React checks if the data passed to
value has changed, it'll always think it has since we're always passing in a new object. As a result of that, every component which used
Consumer to subscribe to that data will re-render as well, even if
locale or
toggleLocale didn't change.
To fix this, instead of passing a new object to
value every time, we want to give it a reference to an object it already knows about. To do this, we can use the
useMemo Hook.
export default function App () {
  const [locale, setLocale] = React.useState('en')

  const toggleLocale = () => {
    setLocale((locale) => {
      return locale === 'en' ? 'es' : 'en'
    })
  }

  const value = React.useMemo(() => ({
    locale,
    toggleLocale
  }), [locale])

  return (
    <LocaleContext.Provider value={value}>
      <Home />
    </LocaleContext.Provider>
  )
}
React will make sure the
value that
useMemo returns stays the same unless
locale changes. This way, any component which used
Consumer to subscribe to our
locale context will only re-render if
locale changes.
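As a rough, non-React sketch of that guarantee (an illustration only, not how React actually implements useMemo): the factory is re-run, and a new reference handed out, only when a dependency changes.

```javascript
// Minimal memoization sketch: cache the last value and its deps,
// and only recompute when some dependency changed.
function makeMemo() {
  let lastDeps;
  let lastValue;
  return function memo(factory, deps) {
    const changed =
      lastDeps === undefined ||
      deps.length !== lastDeps.length ||
      deps.some((dep, i) => dep !== lastDeps[i]);
    if (changed) {
      lastValue = factory();
      lastDeps = deps;
    }
    return lastValue;
  };
}

const memo = makeMemo();
const first = memo(() => ({ locale: 'en' }), ['en']);
const second = memo(() => ({ locale: 'en' }), ['en']); // deps unchanged
const third = memo(() => ({ locale: 'es' }), ['es']);  // deps changed

console.log(first === second); // true  — same reference, no re-render
console.log(first === third);  // false — new object, consumers re-render
```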
Now, anywhere inside of our component tree, we can get access to the
locale value or the ability to change it via
toggleLocale.
// Blog.js
import React from 'react'
import LocaleContext from './LocaleContext'

export default function Blog () {
  return (
    <LocaleContext.Consumer>
      {({ locale, toggleLocale }) => (
        <React.Fragment>
          <Nav toggleLocale={toggleLocale} />
          <Posts locale={locale} />
        </React.Fragment>
      )}
    </LocaleContext.Consumer>
  )
}
Here's a link to the full
locale app if you want to play around with it. Admittedly, it's not the best use of Context as it's a pretty shallow app, but it gives you the general idea how to use Context in an app with multiple routes/components.
defaultValue
Whenever you render a
Consumer component, it gets its value from the
value prop of the nearest
Provider component of the same Context object. However, what if there isn't a parent
Provider of the same Context object? In that case, it'll get its value from the first argument that was passed to
createContext when the Context object was created.
const MyContext = React.createContext('defaultValue')
And adapted to our example.
const LocaleContext = React.createContext('en')
Now, if we use <LocaleContext.Consumer> without previously rendering a <LocaleContext.Provider>, the value passed to Consumer will be the default value, en.
Here's a very clever example my good friend chantastic came up with. I've modified it a bit, but the core idea is his.
import React from 'react'
import ReactDOM from 'react-dom'

const ExpletiveContext = React.createContext('shit')

function ContextualExclamation () {
  return (
    <ExpletiveContext.Consumer>
      {(word) => <span>Oh {word}!</span>}
    </ExpletiveContext.Consumer>
  )
}

function VisitGrandmasHouse () {
  return (
    <ExpletiveContext.Provider value='poop'>
      <h1>Grandma's House 🏡</h1>
      <ContextualExclamation />
    </ExpletiveContext.Provider>
  )
}

function VisitFriendsHouse () {
  return (
    <React.Fragment>
      <h1>Friend's House 🏚</h1>
      <ContextualExclamation />
    </React.Fragment>
  )
}

function App () {
  return (
    <React.Fragment>
      <VisitFriendsHouse />
      <VisitGrandmasHouse />
    </React.Fragment>
  )
}

ReactDOM.render(<App />, document.getElementById('root'))
Can you follow what's going on? First, we create a new
ExpletiveContext and set its default value to
shit. Then we render two components,
VisitFriendsHouse and
VisitGrandmasHouse.
Because we're allowed to swear at our friend's house,
VisitFriendsHouse renders
ExpletiveContext.Consumer whose value will default to
shit since there's not an
ExpletiveContext.Provider in the tree above it.
Unlike at our friend's house, at Grandma's we're not allowed to swear. So instead of just rendering
ExpletiveContext.Consumer, we wrap it in
ExpletiveContext.Provider passing it a value of
poop. This way when the
Consumer looks for its nearest
Provider, it'll find it and get a value of
poop rather than the default value of
shit.
useContext
At this point, you've seen that in order to get access to the data that was passed as a
value prop to the Context's
Provider component, you use
Consumer as a render prop.
export default function Nav () {
  return (
    <LocaleContext.Consumer>
      {({ locale, toggleLocale }) => locale === "en"
        ? <EnglishNav toggleLocale={toggleLocale} />
        : <SpanishNav toggleLocale={toggleLocale} />}
    </LocaleContext.Consumer>
  );
}
This works, but as always the render-props syntax is a little funky. The problem gets worse if you have multiple context values you need to grab.
export default function Nav () {
  return (
    <AuthedContext.Consumer>
      {({ authed }) => authed === false
        ? <Redirect to='/login' />
        : <LocaleContext.Consumer>
            {({ locale, toggleLocale }) => locale === "en"
              ? <EnglishNav toggleLocale={toggleLocale} />
              : <SpanishNav toggleLocale={toggleLocale} />}
          </LocaleContext.Consumer>}
    </AuthedContext.Consumer>
  )
}
Oof. Luckily for us, there's a Hook that solves this problem -
useContext.
useContext takes in a Context object as its first argument and returns whatever was passed to the
value prop of the nearest
Provider component. Said differently, it has the same use case as
.Consumer but with a more composable API.
export default function Nav () {
  const { locale, toggleLocale } = React.useContext(LocaleContext)

  return locale === 'en'
    ? <EnglishNav toggleLocale={toggleLocale} />
    : <SpanishNav toggleLocale={toggleLocale} />
}
As always, this API really shines when you need to grab multiple values from different Contexts.
export default function Nav () {
  const { authed } = React.useContext(AuthedContext)
  const { locale, toggleLocale } = React.useContext(LocaleContext)

  if (authed === false) {
    return <Redirect to='/login' />
  }

  return locale === 'en'
    ? <EnglishNav toggleLocale={toggleLocale} />
    : <SpanishNav toggleLocale={toggleLocale} />
}
Warnings
Here's the thing, when you're a hammer, everything looks like a nail. Typically when you first learn about Context, it appears like it's the solution to all your problems. Just remember, there's nothing wrong with passing props down multiple levels, that's literally how React was designed. I don't have a universal rule for when you should and shouldn't use Context, just be mindful that it's common to overuse.
SBB takes great care to setup its schedule. Making sure trains are always accessible to people that need them is not an easy task. Ensuring all constraints are met means this task is even more demanding. Limited network capacity, fixed fleet size, and labour laws need to be considered. All these factors mean train scheduling is a complex optimization problem.
The carefully designed schedule works well if everything goes as planned. The network is at its full capacity, the fleet is available, and the people flows are as expected. But what happens if one of these variables changes? Can SBB handle the new situation under its schedule?
To ensure smooth people flow, SBB adds special trains to its schedule. Special trains are one-time connections that happen for a specific reason. Geneva Motor Show? OpenAir in St.Gallen? Long weekend when everyone travels to Italy? You will surely find special trains on those days.
To ensure the best operability, railways need maintenance. Performing maintenance on tracks may reduce the network capacity. Constructions don't only vary in time and space, but also in the capacity reduction to the network. Some of them will have minor impact. Others will result in closing all rail connections between two cities.
The regular train schedule satisfies hundreds of constraints. Adding special trains means putting additional weight on an already tight network. Planning construction sites means reducing the network capacity. Having special trains, and construction sites, scheduled at the same place and time may cause serious issues.
In this notebook we show how to identify lines where such conflicts occur. Having a clear picture of the conflicts will allow the scheduling team to make right decision and plan reliable service for everyday, and for special occasions.
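As a sketch of that conflict check (the field names and records below are invented for illustration, not the real SBB data): a special train conflicts with a construction site when both touch the same line segment and their date ranges overlap.

```python
from datetime import date

def overlaps(a_start, a_end, b_start, b_end):
    # Two closed date ranges overlap iff each starts before the other ends.
    return a_start <= b_end and b_start <= a_end

# Hypothetical records, loosely shaped like the downloads below.
special_trains = [
    {'line': 'ZUE-SG', 'start': date(2020, 8, 28), 'end': date(2020, 8, 30)},
]
construction_sites = [
    {'line': 'ZUE-SG', 'start': date(2020, 8, 1), 'end': date(2020, 9, 15)},
    {'line': 'GE-LS',  'start': date(2020, 8, 1), 'end': date(2020, 9, 15)},
]

conflicts = [
    (train, site)
    for train in special_trains
    for site in construction_sites
    if train['line'] == site['line']
    and overlaps(train['start'], train['end'], site['start'], site['end'])
]

print(len(conflicts))  # 1 — the special train meets the ZUE-SG works
```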
First, let's import the dependencies and helper functions:
import datetime

import plotly.express as px
import plotly.graph_objects as go

from utils import (MapMode, create_network_graph, download_construction_sites,
                   download_special_trains, filter_constructions, filter_trains,
                   generate_map_layers, generate_network_trace, plot_network,
                   plot_network_layer)

import warnings
warnings.filterwarnings('ignore')
Now, let's download the network data. We will use:
# Core data
network = create_network_graph()
all_trains = download_special_trains()
all_constructions = download_construction_sites()
network_trace = generate_network_trace(network)
Let's take a look at our network. Below you will find all railway nodes, and the connections between them. The line shapes are simplified.
# Network plot
fig = plot_network(network_trace)
fig.show()
New Scrabble Wooden Pencils - 6 Pencils & Eraser
$15
Description: 6 high quality HB pencils with eraser and Scrabble detail, enclosed in a branded case.
Just the thing for noting down those triple word scores.
Pencil measures 18.5 x 5cm.
Customer Service: Message The Consignment Bag and we will reply within 24 hours.
*We will be happy to answer any questions you may have after reading the item's description or our terms.*
Part of The Shopping Bag Brands
We accept all forms of PayPal (American Express, Discover, eCheck, MasterCard, Visa and PayPal transfers) through checkout only. If payment is not received within 5 days, a nonpayment dispute will be filed and item(s) will no longer be available for purchase, no exceptions.
In no event shall The Consignment Bag be liable for any special, exemplary, consequential damages or loss of profits, even if advised of the possibility of such.
In order for items to qualify for combined shipping, items must be purchased and paid for together within the 5 day period from the day and time of the first sale won.
Please wait to pay for all items together if you want all items to be shipped together. Items paid separately will be shipped separately. Orders ship 24-48 hours after payment has been received, excluding weekends and holidays.
Shipping Rates:
Standard Shipping: $2.99
Express Mail: $24.99
Shipping Information:
Our main carrier is USPS.
Standard Shipping Delivery: 4-8 business days; most of the time you will get it sooner.
Expedited Delivery: 1-2 business days
WE SHIP WORLDWIDE!
Shipping Rates:
International Shipping and Handling: $9.99 US.
International Buyer: Please Message Us If You Need Expedited Delivery.
Shipping Information
Our main carrier is USPS.
For international shipping, when you receive your package depends on how long the customs process takes. We will process your order in 1-2 business days. Shipments will be declared at the value paid. Legally, we are not able to mark any shipments as a gift.
Combined shipping is available; please see the "Combined Shipping Policy" information tab.
If a package is not cleared/claimed from customs and is returned, the buyer is responsible for the postage of re-shipping.
We currently only provide tracking info for domestic shipping. For international shipping, if you want to track your package, please message us before purchase, so we will provide a quote on shipping upgrade.
The Consignment Bag wants you to absolutely love your purchase, but we know that from time to time what looks good in a picture simply won't do. With that in mind, we gladly accept return requests within 14 days (Calendar days) from receiving your item.
We kindly ask our buyers to read the entire item description and measurements before purchasing.
If you would like to return your item, kindly contact our customer service team via Messaging to request an RMA Number (Return Merchandise Authorization Number). Returns sent without the required RMA Number will not be accepted.
Please include the following details in your message:
Item Name
Item Number
Reason for return (i.e. size, color is not right, etc.)
*If the item received was defective, please contact The Shopping Bag via messaging for different instructions*
The Consignment Bag will reply within 24 hours with the required RMA number and return shipping information. Shipping costs for unwanted items are your responsibility.
Once the item is received with all original packaging, tags and accessories, The Shopping Bag staff will inspect the product and provide a full or partial refund. All new with tags items purchased must be returned as new with tags or no refund will be granted. All pre-owned items have been photographed, inspected and noted by our consignment staff, therefore any new flaws upon return will not be granted a refund.
All returns must have tracking information sent to The Shopping Bag through messaging within three days of receiving the RMA number. Items valued over $200 must be insured along with tracking information. Your returned item must be post marked within 14 Days (Calendar Days) of receiving your item from The Shopping Bag.
The Shopping Bag WILL NOT Accept the Following Items for a Return:
Cosmetics (i.e. Makeup, Nail Polish, Soaps, Shower Caps, Cosmetic Tools), Food Products, Pieced Earrings, Food Accessories, Perfume, Air Fresheners, Underwear, Hair Accessories.
We welcome inquiries prior to your purchase to ensure you are getting exactly what you want!© 2012, The Consignment Bag
Three (3x) Palomino Blackwing Pencil 602 3pc, Firm Graphite Bonus Erasers
Palomino Blackwing Long Point Sharpener Made In Germany - Ships From California
Lot Of 81 Vintage Pencils Dixon Anadel Leadfast Thinex Venus 1950 1960 Usa
Angry Birds Pencil Toppers Two Blue Birds Plus Mystery Bird Set Of 3
6 Colorful Stripe Pen Magic Bendy Flexible Soft Pencil With Eraser Kids Writing
W3C::SOAP::Manual::XSD
W3C::SOAP::XSD::Parser maps XMLSchema Documents to perl/Moose objects. There are some limitations in the way this is done but the basic mapping is:
- A common class is created that represents the whole schema; this class includes all other classes that are created. The attributes of this class are the xs:elements of the XSD.
- xs:element elements are mapped to attributes of the generated class.
- xs:simpleType types are mapped to MooseX::Types and located in the {BaseClass}::Types module. They then need to be imported into modules that need them.
- xs:complexType types are mapped to Moose classes under the {BaseClass} namespace.
Python Regular Expressions or Python RegEx are patterns that permit us to ‘match’ various string values in a variety of ways.
A pattern is simply one or more characters that represent a set of possible match characters. In regular expression matching, we use a character (or a set of characters) to represent the strings we want to match in the text.
In this Python RegEx tutorial, we will learn all the important aspects of Regular Expressions, or RegEx, in Python. Let us get started then.
The match function matches the Python RegEx pattern to the string with optional flags. Syntax:
re.match(pattern, string, flags=0)
Where ‘pattern’ is a regular expression to be matched, and the second parameter is a Python string that will be searched to match the pattern at the start of the string.
Example of Python regex match:

import re

print(re.match("b", "intellipaat"))

Output:

None

re.match() returns None here because "intellipaat" does not start with "b".
The six most important special sequence characters are \d (digit), \D (non-digit), \s (whitespace), \S (non-whitespace), \w (word character), and \W (non-word character).
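A few of these special sequences in action (assuming Python 3):

```python
import re

# \d matches a digit, \w a word character (letters, digits, underscore),
# \s any whitespace; \D, \W, and \S match their complements.
print(re.search(r'\d+', 'order 42').group())     # 42
print(re.findall(r'\w+', 'a_b c!'))              # ['a_b', 'c']
print(re.sub(r'\s+', ' ', 'too   many spaces'))  # too many spaces
```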
It searches for the first occurrence of a Regular Expression pattern within a string, with optional flags.
Syntax:
re.search(pattern, string, flags=0)
Example of Python regex search:
import re

m = re.search('\bopen\b', 'please open the door')
print(m)

Output:

None
This output occurred because, in an ordinary Python string, the '\b' escape sequence is treated as the special backspace character rather than the regex word-boundary metacharacter. To send a literal backslash through to the regex engine, it must itself be escaped with another backslash.
import re

m = re.search('\\bopen\\b', "please open the door")
print(m)

Output:

<re.Match object; span=(7, 11), match='open'>
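Idiomatic Python sidesteps the double escaping entirely with raw string literals (r'...'), which pass backslashes through to the regex engine unchanged:

```python
import re

# r'\bopen\b' keeps \b as a regex word boundary; no doubling needed.
m = re.search(r'\bopen\b', 'please open the door')
print(m.group())  # open
```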
Python regular expressions accept optional modifier flags. The six most commonly used are re.I (case-insensitive matching), re.L (locale-aware \w, \W, \b, \B), re.M (multi-line: ^ and $ match at each line), re.S (makes . match a newline as well), re.U (Unicode-aware matching), and re.X (verbose: allows whitespace and comments in patterns).
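Two of the most commonly used modifiers in action:

```python
import re

# re.IGNORECASE (re.I) makes the match case-insensitive.
print(re.search('python', 'I love PYTHON', re.IGNORECASE) is not None)  # True

# re.MULTILINE (re.M) lets ^ and $ match at every line, not just string ends.
print(re.findall(r'^\w+', 'one\ntwo\nthree', re.MULTILINE))  # ['one', 'two', 'three']
```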
This brings us to the end of this module of the Python RegEx tutorial. Here, we learnt what Python RegEx is, regular expression characters in Python, the match function, special sequence characters, the search function, and Python RegEx modifiers.
"...one of the most highly regarded and expertly designed C++ library projects in the world." — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards
This tutorial program demonstrates how to use asio's asynchronous callback functionality by modifying the program from tutorial Timer.1 to perform an asynchronous wait on the timer.
#include <iostream>
#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
Using asio's asynchronous functionality means having a callback function
that will be called when an asynchronous operation completes. In this program
we define a function called print to be called when the asynchronous wait finishes.

void print(const boost::system::error_code& /*e*/)
{
  std::cout << "Hello, world!" << std::endl;
}

int main()
{
  boost::asio::io_service io;

  boost::asio::deadline_timer t(io, boost::posix_time::seconds(5));
Next, instead of doing a blocking wait as in tutorial Timer.1, we call the
deadline_timer::async_wait()
function to perform an asynchronous wait. When calling this function we pass
the print callback handler that was defined above.

t.async_wait(&print);
Finally, we must call the io_service::run() member function on the io_service object.
The asio library provides a guarantee that callback handlers will only be called from threads that are currently calling io_service::run(). Therefore unless the io_service::run() function is called the callback for the asynchronous wait completion will never be invoked.
The io_service::run() function will also continue to run while there is still "work" to do. In this example, the work is the asynchronous wait on the timer, so the call will not return until the timer has expired and the callback has completed.
It is important to remember to give the io_service some work to do before calling io_service::run(). For example, if we had omitted the above call to deadline_timer::async_wait(), the io_service would not have had any work to do, and consequently io_service::run() would have returned immediately.
io.run();

return 0;
}
Question
Does a serial bond normally have only one maturity date? What types of bonds are normally issued on this basis?
A handy makefile for simple C/C++ applications
Easymake is a handy makefile for C/C++ applications on Linux system. For simple applications, you don’t even need to write a single line of makefile code to build your target with easymake.
Features description:
If you modify foo.h, all your source files with #include "foo.h" will be re-compiled.
NOTICE: Easymake is designed to be easy to use on simple applications, not as a highly flexible or extensible template. If you want more customization, you might need to look for a small and simple example for start.
git clone
cd easymake/samples/basics
cp ../../easymake.mk Makefile
make
./bin/add
# if you rename add.cpp to myprogram.cpp, then you get ./bin/myprogram
Files with the *_test.cpp or *_test.c pattern will be used for testing (inspired by golang).
Easymake is trying to follow the Makefile Conventions (1) (2). The following options are supported.

- CFLAGS: Extra flags to give to the C compiler.
- CXXFLAGS: Extra flags to give to the C++ compiler.
- LDFLAGS: Extra flags to give to compilers when they are supposed to invoke the linker.
- LDLIBS: Library flags or names given to compilers when they are supposed to invoke the linker.
- ARFLAGS: Flags to give the archive-maintaining program; default: cr
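Putting those options together, a project Makefile that wraps easymake might set a few of them before the include (the flags, paths, and libraries below are illustrative, not requirements):

```make
# Illustrative only — adjust the flags and libraries for your project.
CFLAGS   = -Wall -O2
CXXFLAGS = -Wall -O2 -std=c++11
LDFLAGS  = -L/usr/local/lib
LDLIBS   = -lpthread

include /path/to/easymake.mk
```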
In the GIFs, I simply copy easymake.mk into my source code directory as a makefile. However, for code simplicity, I recommend the following style:

CXXFLAGS = -O2

# other options
# ...

include /path/to/easymake.mk
Using Houdini Paint API to Render a 3D Model
“That’s f*cked up”
— Daniel Appelquist, co-chair of the W3C Technical Architecture Group
The demo in this article uses the Houdini paint API, which is a part of the larger Houdini spec. Houdini is still coming to browsers but is ready to try out.
In this article I hope to inspire you to try out the Houdini Paint API. It’s really fun, and you can see just how powerful it is. Here I do something which looks cool but isn’t the most optimal way to show 3D models in the Web, but I hope it inspires you to see how far you can take the Houdini APIs.
What I made here was a stupid idea that went too far, but it worked! My thought process went like so:
- The Houdini Paint APIs are kind of like the Canvas 2D APIs
- THREE.js used to have a canvas renderer
- With tree shaking the Houdini Paint Worklet won’t be so big.
- I can control it with CSS Custom Properties!
Amazingly it worked:
Why is it a bad idea?
The Houdini paint worklet uses the CPU to render to the canvas, rendering a full 3D model is an expensive process to do without a graphics card. Fortunately worklets run on a seperate CPU thread so shouldn’t slow down the website too much, but if you repaint the object too much it may make the device’s fans spin up.
If you want to render 3D models in the Web use WebGL, it is what it is there for. WebGL is a lot more performant for rendering 3D scenes and will give a lot neater results.
Worklets can’t fetch resources from the network, so everything including the 3D model has to be baked into the Worklet script itself so can’t be changed on the fly.
The final worklet including the 3D model was 1100kb after minification!! It would be more efficient over the network to just use a video tag.
Constraints of the Houdini Paint Worklet
Everything in the PaintWorklet happens synchronously. It can’t access the state of the document nor can it use the network to asynchronously load resources. It also can’t import scripts either, everything has to be inlined.
So if you wanted to use a different 3D model you would have to recompile the whole script again.
Rendering is tied to the paint callback function of the worklet and cannot be called from within the worklet. So no setTimeout or requestAnimationFrame, or animation.
Breaking down how it works
The key element which makes this project work is the JavaScript bundler rollup. Rollup is a package bundler for JavaScript files which use ecmascript modules (es-modules). Rollup allowed me to combine packages from npm, json files and local packages with tree shaking to remove unused modules keeping the size smaller.
Fortunately, newer builds of THREE.js are built using ecmascript modules, which allows developers to take advantage of this tree shaking behaviour. Unfortunately, the old THREE.js CanvasRenderer, which was used to draw 3D graphics to an HTML canvas, had been removed before then.

To get around this I ported CanvasRenderer and another module from an older THREE.js to use es-modules, which allowed me to use only the bits I needed with a newer build of THREE.js to render the scene to canvas. This came to about ~500kb which, whilst still large, is a significant saving compared to the full THREE.js.
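For reference, a rollup configuration for a worklet bundle looks roughly like this (a sketch — the plugin set, file names, and options are illustrative, not the exact config used for this demo):

```javascript
// rollup.config.js (illustrative)
import resolve from '@rollup/plugin-node-resolve';
import json from '@rollup/plugin-json';
import { terser } from 'rollup-plugin-terser';

export default {
  // Everything, including the JSON model, is inlined into one script,
  // because a paint worklet cannot import or fetch anything at runtime.
  input: 'src/worklet.js',
  output: { file: 'dist/worklet.js', format: 'iife' },
  plugins: [resolve(), json(), terser()]
};
```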
Importing the 3D Model
This was tricky to get right and took some trial and error with different 3D model formats to get right. This was my final technique:
- Download a Low Polygon 3D model from Google Blocks as a Triangulated OBJ. I used this model by Linus Ekenstam:
Add it to a new scene in the Three.js editor
- Export the scene as a Three.js JSON
- I then imported this JSON file using the Rollup json loader.
import * as campfire from './scene.json';
- Then I parsed this with Three.js and it was ready to use
const loader = new ObjectLoader();
const camp = loader.parse( campfire );
- I had to tweak it a little bit to make it look good
const floorName = "mesh1292612855";
const floor = camp.getObjectByName( floorName, true );
floor.renderOrder = -1;
camp.position.y = -3;
camp.rotation.y = -Math.PI/2;
Now everything is imported, we are ready to render it.
Setting up the Houdini Paint Worklet
To register a paint worklet, use the registerPaint function inside the worklet script. Below we register a paint class called “three”.
registerPaint("three", class {
  static get inputProperties() {
    return [];
  }
  paint(ctx, size, props) {
    camera.aspect = size.width / size.height;
    camera.updateProjectionMatrix();
    renderer.setContext(ctx);
    renderer.setSize(size.width, size.height);
    renderer.render(scene, camera);
  }
});
The paint method receives three arguments.
ctx is a drawing context very similar to the
CanvasRenderingContext2D you would get from a Canvas, although some methods are missing.
size provides the width and height of the element you are drawing to.
props is a map which provides access to the CSS custom properties requested from
inputProperties.
In the paint function, on each render I update the camera to handle the new width and height of the element, and I set the renderer's context to
ctx to tell THREE.js to render there.
Now this is added, we are ready to use the worklet in CSS. This is how we apply it to an element:
main {
background-image: paint(three);
}
We use
paint(workletName) to tell CSS to use this worklet for the background image.
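One step not shown above: the main page has to load the worklet module before paint(three) resolves to anything. A minimal sketch, written as a function so the fallback path is explicit; the 'worklet.js' URL is an assumption.

```javascript
// Sketch: load the paint worklet from the main page, guarding against
// browsers that do not ship the CSS Paint API.
function addPaintWorklet(cssObj, url) {
  if (cssObj && cssObj.paintWorklet && typeof cssObj.paintWorklet.addModule === 'function') {
    cssObj.paintWorklet.addModule(url); // returns a promise in real browsers
    return true;
  }
  return false; // no Paint API: caller can fall back to a static background
}
```

In a supporting browser this is just addPaintWorklet(CSS, 'worklet.js'); when it returns false you can set a plain background-image instead.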
Controlling the Scene in Realtime
Even though all the assets have to be baked in we can provide the user some amount of control by responding to certain custom CSS properties.
In this example we will listen for rotations in the X, Y and Z axes. To do this, add the CSS properties to the
inputProperties array:
static get inputProperties() {
return ["--rotate-x", "--rotate-y", "--rotate-z"];
}
You can register properties to define their type, but to keep things simple we won't do that here. Because they are unregistered, they get exposed as strings, since CSS does not know how to handle them.
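If you did want typed properties, registration would go through the CSS Properties and Values API in the main page (not the worklet). A hedged sketch, written as a function taking the page's CSS object so the unsupported case stays explicit:

```javascript
// Sketch: register the rotation properties as numbers so CSS knows their
// type (and could interpolate them). Returns how many were registered.
function registerRotationProperties(cssObj) {
  if (!cssObj || typeof cssObj.registerProperty !== 'function') {
    return 0; // API unavailable: the properties stay unregistered strings
  }
  const names = ['--rotate-x', '--rotate-y', '--rotate-z'];
  names.forEach(function (name) {
    cssObj.registerProperty({
      name: name,
      syntax: '<number>',
      initialValue: '0',
      inherits: false
    });
  });
  return names.length;
}
```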
We will use them to rotate the 3D model. Here I convert each rotation from a string in degrees to a number in radians so it can be used with THREE.js.
group.rotation.set(
Math.PI * Number(props.get("--rotate-x"))/180,
Math.PI * Number(props.get("--rotate-y"))/180,
Math.PI * Number(props.get("--rotate-z"))/180
);
I can then set these properties to change the rotation in CSS:
main {
--rotate-x: 10;
--rotate-y: 90;
--rotate-z: -50;
}
I can even set them dynamically with JavaScript:
document.addEventListener('mousemove', function (e) {
  document.body.style.setProperty(
    '--rotate-y',
    30 * ((e.screenX / document.body.clientWidth) - 0.5)
  );
  document.body.style.setProperty(
    '--rotate-x',
    30 * ((e.screenY / document.body.clientHeight) - 0.5)
  );
});
Be careful with animations
Animating the worklet like this is fun, but it causes an expensive repaint every time. In fact, animating anything on this element which triggers paint will cause an expensive paint operation.
Have fun!
I hope this article inspires you to have a play with Houdini paint. It is fun to do and is one of a number of Houdini APIs allowing us to extend CSS to make it even more powerful. | https://medium.com/samsung-internet-dev/using-the-css-houdini-paint-api-to-show-a-3d-model-79b2a4e69255?source=collection_archive---------6----------------------- | CC-MAIN-2019-30 | refinedweb | 1,235 | 63.39 |
libpfm — a helper library to develop monitoring tools
Synopsis
#include <perfmon/pfmlib.h>
Description
This is a helper library used by applications to program specific performance monitoring events. Those events are typically provided by the hardware or the OS kernel. The most common hardware events are provided by the Performance Monitoring Unit (PMU) of modern processors. They can measure elapsed cycles or the number of cache misses. Software events usually count kernel events such as the number of context switches or page faults.
The library groups events based on which source is providing them. The term PMU is generalized to any event source, not just hardware sources. The library supports hardware performance events from most common processors, each grouped under a specific PMU name, such as Intel Core or IBM Power 6.
Programming events is usually done through a kernel API, such as Oprofile, perfmon, perfctr, or perf_events on Linux. The library provides support for perf_events which is available in the Linux kernel as of v2.6.31. Perf_events supports selected PMU models and several software events.
At its core, the library provides a simple translation service, whereby a user specifies an event to measure as a string and the library returns the parameters needed to invoke the kernel API. It is important to realize that the library does not make the system call to program the event.
Note: You must first call pfm_initialize() in order to use any of the other provided functions in the library.
A first part of the library provides an event listing and query interface. This can be used to discover the events available on a specific hardware platform.
The second part of the library provides a set of functions to obtain event encodings from event strings. Event encoding depends primarily on the underlying hardware but also on the kernel API. The library offers a generic API to address the first situation, but it also provides entry points for specific kernel APIs such as perf_events. In that case, it is able to prepare the data structure which must be passed to the kernel to program a specific event.
Event Detection
When the library is initialized via pfm_initialize(), it first detects the underlying hardware and software configuration. Based on this information it enables support for certain PMUs. Multiple event tables may be activated.
It is possible to force activation of a specific PMU (group of events) using an environment variable.
Event Strings
Events are expressed as strings. Those strings are structured and may contain several components depending on the type of event and the underlying hardware.
String parsing is always case insensitive.
The string structure is defined as follows:
[pmu::][event_name][:unit_mask][:modifier|:modifier=val]
or
[pmu::][event_name][.unit_mask][.modifier|.modifier=val]
The components are defined as follows:
- pmu
Optional name of the PMU (group of events) to which the event belongs. This is useful to disambiguate events in case events from different sources have the same name. If not specified, the first match is used.
- event_name
The name of the event. It must be the complete name, partial matches are not accepted. This component is required.
- unit_mask
This designates an optional sub-event. Some events can be refined using sub-events. An event may have multiple unit masks and it may or may not be possible to combine them. If more than one unit mask needs to be passed, then the [:unit_mask] pattern can be repeated.
- modifier
A modifier is an optional filter which modifies how the event counts. Modifiers have a type and a value. The value is specified after the equal sign. No space is allowed. In the case of boolean modifiers, it is possible to omit the value true (1); the presence of the modifier is interpreted as meaning true. Events may support multiple modifiers, in which case the [:modifier|:modifier=val] pattern can be repeated. There is no ordering constraint between modifiers and unit masks: modifiers may be specified before unit masks and vice versa.
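To make the grammar concrete, here is an illustrative, deliberately naive parser for the ':' delimited form. It is not part of libpfm; in particular, without the library's event tables it cannot tell a unit mask from a boolean modifier given without a value.

```python
def parse_event(s):
    """Split a libpfm-style event string into (pmu, event, masks, modifiers)."""
    s = s.lower()                      # string parsing is case insensitive
    pmu = None
    if '::' in s:
        pmu, s = s.split('::', 1)      # optional [pmu::] prefix
    fields = s.split(':')
    event = fields[0]                  # required event name
    masks, modifiers = [], {}
    for field in fields[1:]:
        if '=' in field:
            key, value = field.split('=', 1)
            modifiers[key] = value     # modifier with an explicit value
        else:
            masks.append(field)        # unit mask, or boolean modifier (true)
    return pmu, event, masks, modifiers

print(parse_event("core::LLC_MISSES:k:c=1"))  # ('core', 'llc_misses', ['k'], {'c': '1'})
```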
Environment Variables
It is possible to enable certain debug features of the library using environment variables. The following variables are defined:
- LIBPFM_VERBOSE
Enable verbose output. Value must be 0 or 1.
- LIBPFM_DEBUG
Enable debug output. Value must be 0 or 1.
- LIBPFM_DEBUG_STDOUT
Redirect verbose and debug output to the standard output file descriptor (stdout). By default, the output is directed to the standard error file descriptor (stderr).
- LIBPFM_FORCE_PMU
Force a specific PMU model to be activated. In this mode, only that one model is activated. The value of the variable must be the PMU name as returned by the pfm_get_pmu_name() function. Note that for some PMU models, it may be possible to specify additional options, such as specific processor models or steppings. Additional parameters necessarily appear after a comma. For instance, LIBPFM_FORCE_PMU=amd64,16,2,1.
- LIBPFM_ENCODE_INACTIVE
Set this variable to 1 to enable encoding of events for non-detected, but supported, PMU models.
- LIBPFM_DISABLED_PMUS
Provides a list of PMU models to disable. This is a comma-separated list of PMU models. The PMU model is the string in the name field of the pfm_pmu_info_t structure. For instance, LIBPFM_DISABLED_PMUS=core,snb will disable both the Intel Core and SandyBridge core PMU support.
See Also
libpfm_amd64_k7(3), libpfm_amd64_k8(3), libpfm_amd64_fam10h(3), libpfm_intel_core(3), libpfm_intel_atom(3), libpfm_intel_p6(3), libpfm_intel_nhm(3), libpfm_intel_nhm_unc(3), pfm_get_perf_event_encoding(3), pfm_initialize(3)
Some examples are shipped with the library | https://dashdash.io/3/libpfm | CC-MAIN-2020-34 | refinedweb | 889 | 58.38 |
SO, this is what I have:
I have a model which stores the lat and lng of a geographical location, call it location with its own database and table.
public class location {
    public int Id { get; set; }
    public int lat { get; set; }
    public int lng { get; set; }
}
and I want a method along the lines of:
function getDistance(location l){ return Math.sqrt(l.lat - this.lat).... etc etc. }
Where should this go? Probably not the model? Does it belong in the controller? Since it's pretty universal, should I make a controller that is not associated with a view?
If you could suggest some reading, that would be nice too....
Thanks for the advice(s) in advance!
--------------Solutions-------------
Firstly, do what you want: it's your code. Make your own mistakes, and think about why things are or aren't working for your problem domain.
But, typically.
If your object is a View then a common practice is to use a POCO (Plain Old C#/CLR Object), that is, a class that only has properties that you want to bind to your display. You can have as many views of the same model(s) as you like depending on your situation. But they don't normally include any logic.
EDIT:
The Model is where all your business logic goes. The thing with the frameworks people are using is that they encourage the building of CRUD applications, which essentially are just an Access database on the web. Your Model is not just the entity map to your database. Your Model may use some form of persistence, but that isn't necessarily the only activity it performs.
The Controller coordinates action between the client and the Model. In MVC, an operation might ask the Controller for another page of data. It's the Controller's responsibility to ask the Model (or a cache, security framework or anything else) to return the required information it needs to compose the View (the answer) to the question posed to the controller.
Putting it in the model would be just fine.
public class location
{
public int Id { get; set; }
public int lat { get; set; }
public int lng { get; set; }
public double? getDistance(location l)
{
if (l != null)
{
// Euclidean distance; use a haversine formula for real geographic distance.
return Math.Sqrt(Math.Pow(l.lat - this.lat, 2) + Math.Pow(l.lng - this.lng, 2));
}
return null;
}
}
This will return the distance from one instance of
location to another.
I know it isn't part of your question, but I think it's important to mention: it's pretty standard convention to capitalize the first letter of a class and method definition. | http://www.pcaskme.com/where-do-methods-of-an-object-go-in-mvc/ | CC-MAIN-2019-04 | refinedweb | 425 | 63.59 |
Issue -
I have a SQL server that we have upgraded from SQL 2008 R2 to SQL 2012 SP1, and I am attempting to install the AX 2012 R2 reporting extensions. The pre-req checker recognizes both SQL 2012 and SQL 2008, even though SQL 2008 has been upgraded to SQL 2012 and no SQL 2008 instances are on the box anymore. How do we get past the pre-req checker, and what does it look at? I looked at the instance folder in the registry and the only instance (the default instance) references 11.0.3000.
In the dynamicssetuplog.txt file you will see this error -
Performing a Microsoft SQLServer Reporting Services existence check for prerequisite 'Microsoft SQL Server Reporting Services'.
11.0.3000.0
MSSQLSERVER
*** ERROR ***
Provider load failure Provider load failure
*** END ERROR ***
Check failed.
Resolution: Install a supported version of the Reporting Services component of Microsoft SQL Server.
Resolution:
Use wbemtest tool to clean up the invalid instances.
See this blog for details -
I'll paste the response from that blog here as well -
The problem might be with incorrectly unistalled another instance of Reporting Services. I just solved it using these steps (modified version of a guy from MS):
1. Run command wbemtest
2. Click "Connect" and connect to “ROOT\Microsoft\SqlServer\ReportServer” (type this path in the Namespace textbox)
3. After connecting to the namespace, click "Query" to run “SELECT * from __namespace”. There SHOULD be one record in the result window (while the server only has one RS instance).
4. Double-click the record, in the new properties windows, get the path value such as “\\<server>\ROOT\Microsoft\SqlServer\ReportServer:__NAMESPACE.Name="RS_MSSQLSERVER" ”
5. Now, close all sub dialogs, and then re-connect to “ROOT\Microsoft\SqlServer\ReportServer\RS_MSSQLSERVER”. "RS_MSSQLSERVER" is the Namespace.Name from step 4.
6. Query “SELECT * from __namespace” again.
7. Repeat steps 4 and 5, and we will get the new path “ROOT\Microsoft\SqlServer\ReportServer\RS_MSSQLSERVER\v10”
8. Repeat steps 2 to 7 until there is no record in the query result dialog. Finally, the path is \\server\ROOT\Microsoft\SqlServer\ReportServer\RS_MSSQLSERVER\v10\admin
If we can't get the final path \\server\ROOT\Microsoft\SqlServer\ReportServer\RS_MSSQLSERVER\v10\admin, it means the Reporting Services WMI provider is not installed correctly. If we can get to that path, there might be an orphaned instance of SSRS. Go back to step 2 and do these steps:
9. Click "Connect" and connect to “ROOT\Microsoft\SqlServer\ReportServer”
10. After connecting to the namespace, click "Query" to run “SELECT * from __namespace”. If there is more than 1 record, and you have only 1 SSRS instance installed (or at least you think you have), click on all the other records and hit the "Delete" button.
11. Now restart SSRS Configuration Manager and you should connect without problems.
Worked wonderfully
Great work! Solved it for me - Thanks!
Work AWSOME
Thanks a lot, it is work for me after upgrade instance from 2008 r2 to 2012 | http://blogs.msdn.com/b/axsupport/archive/2013/02/21/ax-2012-pre-requisite-checker-sees-both-the-old-sql-and-the-upgraded-sql-instance.aspx?Redirected=true | CC-MAIN-2015-27 | refinedweb | 495 | 65.93 |
Flower Tea Blooming Tea Jasmine Best Selling Products 2017 in USA
US $0.05-0.35 / Bag
7000 Bags (Min. Order)
Flower tea blooming tea jasmine best selling products 2017 in USA
US $15-32 / Kilogram
25 Kilograms (Min. Order)
Best China jasmine flowers in usa
US $2-2.5 / Kilogram
500 Kilograms (Min. Order)
Fresh Jasmine Flower Exporter for / Importers in Malaysia / Singapore / Dubai / Canada / USA/ Uk / France / Germany
US $0.6-0.9 / Foot
500 Feet (Min. Order)
Madurai Jasmine Flower for USA/Canada Importers
US $3-6 / Kilogram
100 Kilograms (Min. Order)
High quality China factory supply organic jasmine Dragon Pearl
US $20-50 / Kilogram
200 Kilograms (Min. Order)
Certified private label detox tea flower tea
US $12-50 / Kilogram
10 Kilograms (Min. Order)
2018 Certified EU Standard Jasmine Dragon Pearl Tea Wholesale Handmade Jasmine Pearls Tea
US $25-55 / Kilogram
2 Kilograms (Min. Order)
USA standard OEM herbal slim teabags organic jasmine puerh tea teabag
US $5.5-6 / Box
5 Boxes (Min. Order)
Best selling products 2016 in usa wholesale fresh cut flowers jasmine all types of fresh flowers jasmine all types of flowers ja
US $0.1-0.5 / Piece
1000 Pieces (Min. Order)
Alibaba china supplier jasmine flowers in usa
US $5-9 / Kilogram
500 Kilograms (Min. Order)
Fresh Jasmine Flower
100 Kilograms (Min. Order)
China Exported Jasmine Tea High Flavor
US $3.5-10 / Kilogram
200 Kilograms (Min. Order)
Factory selling fields and select tea flower tea
US $12-50 / Kilogram
100 Kilograms (Min. Order)
Jasmine flowers in USA
US $8-9 / Meter
100 Kilograms (Min. Order)
China cheap Green Tea Best Quality jasmine Dragon Pearl
US $20-50 / Kilogram
200 Kilograms (Min. Order)
Green Nature Pure Organic jasmine tea for america
US $11-12 / Kilogram
500 Kilograms (Min. Order)
cheap wholesale artificial winter jasmine flower for home decoration
US $0.15-2.3 / Piece
480 Pieces (Min. Order)
Lowest price factory supply healthyand organic jasmine tea
US $20-50 / Kilogram
200 Kilograms (Min. Order)
cheap beautiful high quality artificial winter jasmine flower
US $0.15-2.3 / Piece
1200 Pieces (Min. Order)
Mozambique market washing powder/high qulity detergent powder in usa
US $180-1200 / Metric Ton
1 Metric Ton (Min. Order)
concentrate Jasmine flavor Xian Taima USP grade fruit/flower/herb flavors for e concentrate liquid fruit flavorings
US $10-70 / Liter
1 Liter (Min. Order)
Hot Sale Oriental Beauty/Dong Fang Mei Ren Blooming Tea Including Marigold,Jasmine Flowers
US $0.24-0.49 / Piece
135 Pieces (Min. Order)
Bath Natural Soap Flower / Manufacturer Soap Handmade Skin Whitening Soap with Glycerin Goat Milk
US $0.15-0.25 / Piece
100000 Pieces (Min. Order)
Fresh jasmine flowers for USA
US $2-3 / Kilogram
100 Kilograms (Min. Order)
Handmade Air Freshener Essential Big Flower 10 ml Oil Diffuser
US $1.08-2.08 / Piece
3000 Pieces (Min. Order)
100% Pure Jasmine Grandiflorum Essential Absolute Oil
US $2450.0-2450.0 / Kilogram
1 Kilogram (Min. Order)
Custom white Jasmin flower metal pin,safety pin manufacturer China
US $0.2-0.8 / Piece
100 Pieces (Min. Order)
Jasmine Hydrosol Global Exporter
1 Kilogram (Min. Order)
Wholesale High Quality Pure Organic Jasmine Grandiflorum Absolute
US $2450.0-2450.0 / Unit
1 Unit (Min. Order)
Chinese Characteristic G20 Gift Tea Sunrise Blooming Flower Tea
US $32.9-40.8 / Kilogram
10 Kilograms (Min. Order)
import china brand name 700g laundry detergent powder
US $400-600 / Ton
13 Tons (Min. Order)
100% Pure and Natural Jasmine Flower Oil
US $20-100 / Kilogram
10 Kilograms (Min. Order)
Jasmine Oil Global Exporter
US $250-650 / Kilogram
1 Kilogram (Min. Order)
America litter bentonite clumping cat flower odor
US $180-260 / Ton
24 Tons (Min. Order)
Jasmine Essential Oil
US $100.66-100.66 / Milliliters
15 Milliliters (Min. Order)
good perfume jasmine shower gel body lotion bath and body care set in ceramic bathtub
US $1.8-2.2 / Piece
1000 Pieces (Min. Order)
Winter Jasmine Greeting Cards
5 Dozens (Min. Order)
Odyssey Cologne Spray by Avon
US $7.99-8.99 / Boxes
12 Boxes (Min. Order)
hot sale beautiful artificial mini jasmine
US $0.15-2.3 / Piece
1000 Pieces (Min. Order)
- About product and suppliers:
Alibaba.com offers 120 jasmine flowers in usa products. About 18% of these are flavor tea, 5% are slimming tea, and 3% are fresh cut flowers. A wide variety of jasmine flowers in usa options are available to you, such as gmp, fda, and imo. You can also choose from flower tea, herbal tea, and decorative flowers & wreaths. As well as from box, bag, and can (tinned). And whether jasmine flowers in usa is compressed tea, loose tea, or bagged tea. There are 119 jasmine flowers in usa suppliers, mainly located in Asia. The top supplying countries are China (Mainland), United States, and India, which supply 63%, 30%, and 5% of jasmine flowers in usa respectively. Jasmine flowers in usa products are most popular in North America, Domestic Market, and Eastern Europe. You can ensure product safety by selecting from certified suppliers, including 12 with Other, 8 with ISO9001, and 7 with GMP certification.
Hi all,
Is it possible to use RoboFab control the sidebearing of glyphs.
I was thinking something like this might work.
This code just prints the sidebearing - 20, what I really wanted to do is decrease the left sidebearing by 20.
from robofab.world import CurrentFont
f = CurrentFont()
A = f['A'].leftMargin
A-=20
print A
f.update()
8 Mar 2008 — 10:08am
Have you tried this?
print "before", f['A'].leftMargin
f['A'].leftMargin = f['A'].leftMargin -20
print "after", f['A'].leftMargin
8 Mar 2008 — 2:41pm
That's easy to do in Font Lab also. Select the glyphs you want to change, and then go to: actions/metrics/set side bearings.
8 Mar 2008 — 3:43pm
k.l.
Thanks that worked, still getting to grips with the RoboFab syntax
I think I was trying to do this with my first attempt.
from robofab.world import CurrentFont
f = CurrentFont()
f['A'].leftMargin -=20
f.update()
17 Apr 2012 — 1:53pm
How can I do the same, but using the 'measurement line'?
I mean: to measure the distance from, let's say, the middle of the x-height instead of the first left point.
23 Apr 2012 — 1:07pm
Pablo, have a look at the marginPen module in RoboFab/pens. It will calculate the margins of a glyph at a requested height (or width)
link to marginPen.py on code.robofab.com
23 Apr 2012 — 3:53pm
Erik, thanks!
marginPen is awesome.
I'm working on a spacing macro, will release it shortly. | http://www.typophile.com/node/42955 | CC-MAIN-2015-14 | refinedweb | 264 | 78.65 |
09 February 2010 16:34 [Source: ICIS news]
TORONTO (ICIS news)--A legal group has withdrawn a private lawsuit brought in Germany against Kemira, Evonik and four other producers over alleged price fixing for hydrogen peroxide, it said on Tuesday, without disclosing further details.
The case was brought last year by Brussels-based CDC Cartel Damage Claims, which at the time said it had bought or had been assigned damage claims worth well over €430m ($590m) from pulp and paper firms related to a European hydrogen peroxide cartel from 1994 through 2000.
CDC was founded in 2002 to enforce private damage claims based on the infringement of national or international antitrust laws, according to its website.
A spokeswoman told ICIS news that CDC would prepare a response to media inquiries, but she would not immediately provide additional comment on why the case was withdrawn.
The European Commission in 2006 fined Kemira, Arkema, AkzoNobel, Solvay and
($1 = €0.73)
The QWSServer class provides server-specific functionality in Qt/Embedded. More...
#include <QWSServer>
Inherits QObject.
This enum is used to pass various options to the window system server.
This specifies what sort of event has occurred to a top-level window:
Construct a QWSServer object.
Warning: This class is instantiated by QApplication for Qt/Embedded server processes. You should never construct this class yourself.
The flags are used for keyboard and mouse setting. The server's parent is parent.
Destructs the QWSServer.
Adds a filter f to be invoked for all key events from physical keyboard drivers (events sent via processKeyEvent()).
The filter is not invoked for keys generated by virtual keyboard drivers (events sent via sendKeyEvent()).
Keyboard filters are removed by removeKeyboardFilter().
Returns the list of top-level windows. This list will change as applications add and remove widgets so it should not be stored for future use. The windows are sorted in stacking order from top-most to bottom-most.
Closes keyboard device(s).
Closes the pointer device(s).
If e is true, painting on the display is enabled; if e is false, painting is disabled.
Returns true if the cursor is visible; otherwise returns false.
See also setCursorVisible().
Returns the keyboard mapping table used to convert keyboard scancodes to Qt keycodes and Unicode values. It's used by the keyboard driver in qkeyboard_qws.cpp.
Returns the primary keyboard handler.
See also setKeyboardHandler().
Returns the QWSPropertyManager, which is used for implementing X11-style window properties.
This signal is emitted whenever some text is selected. The selection is passed in text.
Returns the primary mouse handler.
This signal is emitted when the QCopChannel channel is created.
Opens the keyboard device(s).
Opens the mouse device(s).
Refreshes the entire display.
This is an overloaded member function, provided for convenience. It behaves essentially like the above function.
Refreshes the region r.
Removes and deletes the most-recently added filter. The caller is responsible for matching each addition with a corresponding removal.
This signal is emitted immediately after the QCopChannel channel is destroyed. Note that a channel is not destroyed until all its listeners have unregistered.
Resumes mouse handling by reactivating each registered mouse handler.
See also suspendMouse().
If activate is true the screensaver is activated immediately; if activate is false the screensaver is deactivated.
Returns true if the screensaver is active (i.e. the screen is blanked); otherwise returns false.
This function sends the input method event ime to the server.
If there is a window currently in compose mode, the event is sent to that window. Otherwise, the event is sent to the current focus window.
Sends an input method query for the specified property.
You must reimplement the virtual function QWSInputMethod::queryResponse() in a subclass of QWSInputMethod if you want to receive responses to input method queries.
Send a key event. You can use this to send key events generated by "virtual keyboards". unicode is the Unicode value of the key to send, keycode the Qt keycode (e.g. Qt:).
Sets the brush brush to be used as the background in the absence of obscuring windows.
If vis is true, makes the cursor visible; if vis is false, makes the cursor invisible.
See also isCursorVisible().
Set the keyboard driver to k, e.g. if $QWS_KEYBOARD is not defined. The default is platform-dependent.
Set the mouse driver m to use if $QWS_MOUSE_PROTO is not defined. The default is platform-dependent.
Sets the primary keyboard handler to kh.
See also keyboardHandler().
Sets to r the area of the screen which Qt/Embedded applications will consider to be the maximum area to use for windows.
See also QWidget::showMaximized().
Sets the timeout for the screensaver to ms milliseconds. A setting of zero turns off the screensaver.
Sets an array of timeouts for the screensaver to a list of ms milliseconds. A setting of zero turns off the screensaver. The array must be 0-terminated.
Suspends mouse handling by suspending each registered mouse handler.
See also resumeMouse().
Returns the window containing the point pos or 0 if there is no window under the point.
This signal is emitted whenever something happens to a top-level window (e.g. it's created or destroyed). w is the window to which the event of type e has occurred. | http://doc.trolltech.com/4.0/qwsserver.html | crawl-001 | refinedweb | 714 | 62.04 |
You can download the Python code here:
1. FB.init has to be initialized with the Facebook APP ID instead of the API Key, though the apiKey parameter still is used.
2. The oauth: true option must be set in the FB.init() calls.
In other words, the code changes would be:
FB.init({apiKey: facebook_app_id, oauth: true, cookie: true});

3. Instead of response.session, the response should now be response.authResponse. Also, make note that scope: should be used instead of perms:

FB.login(function(response) {
    if (response.authResponse) {
        // ...
    }
}, {scope: 'email,publish_stream,manage_pages'});

Also, if you need to retrieve the user id in the JavaScript, the value is stored as response.authResponse.userID instead of response.session.uid:

FB.api(
    { method: 'fql.query',
      query: 'SELECT ' + permissions.join() + ' FROM permissions WHERE uid=' + response.authResponse.userID },
    function (response) { }
);
If you see yourself not being able to logout, it means you haven't set the right APP ID or forgot to set oauth: true in both your login and logout code. If you're going to make the change, you should make it everywhere in your code!
On the Python/Django side, you need to implement a few helper routines. If Facebook authenticates properly, a cookie with the prefix fbsr_ will be set as a cookie (instead of fbs_). This signed request includes an encoded signature and payload, which must be separated and verified. You can look at the PHP SDK code to understand how it's implemented, or you can review this Python version of the code (see)
def parse_signed_request(signed_request, secret):
    encoded_sig, payload = signed_request.split('.', 2)
    sig = base64_urldecode(encoded_sig)
    data = json.loads(base64_urldecode(payload))
    if data.get('algorithm').upper() != 'HMAC-SHA256':
        return None
    else:
        expected_sig = hmac.new(secret, msg=payload,
                                digestmod=hashlib.sha256).digest()
        if sig != expected_sig:
            return None
    return data

In the PHP SDK code, there is a base64_url_decode function that automatically adds the correct number of "=" characters to the end of the Base64-encoded string. The basic problem is that Base64 encodes 3 bytes for every 4 characters, so the total length will be 4*len(string)/3. We can use this knowledge to realize that the encoded length must be a multiple of 4, and then insert the appropriate number of '=' characters at the end of the string. Facebook also appears to use a Base64-URL variant in which the '+' and '/' characters of standard Base64 are respectively replaced by '-' and '_', which then must be replaced during the decode process (see). The code looks like the following:
def base64_urldecode(data):
    # Pad the encoded string with '=' so its length is a multiple of 4.
    data += "=" * ((4 - len(data) % 4) % 4)
    return base64.urlsafe_b64decode(data)

If you're using the old Python SDK implementation, you may wish to implement code that mimics the way in which the Python SDK implemented get_user_from_cookie, since the expires, session_key, and oauth_token can be derived from retrieving the access token. We also set an fbsr_signed parameter in case you have debugging statements in your code and want to differentiate between your old get_user_from_cookie and this code.
Note: in order to make things backward-compatible, you need to make an extra URL request back to Facebook to retrieve the access token. This code was also inspired from the Facebook PHP SDK code too:
def get_access_token_from_code(code, redirect_url=None):
    """ OAuth2 code to retrieve an application access token. """
    data = {
        'client_id': settings.FACEBOOK_APP_ID,
        'client_secret': settings.FACEBOOK_SECRET_KEY,
        'code': code,
    }
    if redirect_url:
        data['redirect_uri'] = redirect_url
    else:
        data['redirect_uri'] = ''
    return get_app_token_helper(data)

BASE_LINK = ""

def get_app_token_helper(data=None):
    if not data:
        data = {}
    try:
        token_request = urllib.urlencode(data)
        app_token = urllib2.urlopen(BASE_LINK + "/oauth/access_token?%s" % token_request).read()
    except urllib2.HTTPError, e:
        logging.debug("Exception trying to grab Facebook App token (%s)" % e)
        return None
    matches = re.match(r"access_token=(?P<token>.*)", app_token).groupdict()
    return matches.get('token')
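As a sanity check, the whole scheme can be exercised end to end by building a signed request the same way Facebook does and then verifying it. This is a self-contained Python 3 sketch (the post's code above targets Python 2, and the secret and payload here are made up):

```python
import base64
import hashlib
import hmac
import json

def b64_urlencode(raw):
    return base64.urlsafe_b64encode(raw).rstrip(b'=').decode('ascii')

def b64_urldecode(data):
    data += '=' * ((4 - len(data) % 4) % 4)   # re-add the stripped padding
    return base64.urlsafe_b64decode(data)

def make_signed_request(payload_dict, secret):
    payload = b64_urlencode(json.dumps(payload_dict).encode('utf-8'))
    sig = hmac.new(secret.encode(), msg=payload.encode(),
                   digestmod=hashlib.sha256).digest()
    return b64_urlencode(sig) + '.' + payload

def verify(signed_request, secret):
    encoded_sig, payload = signed_request.split('.', 1)
    expected = hmac.new(secret.encode(), msg=payload.encode(),
                        digestmod=hashlib.sha256).digest()
    if not hmac.compare_digest(b64_urldecode(encoded_sig), expected):
        return None
    return json.loads(b64_urldecode(payload))

sr = make_signed_request({'algorithm': 'HMAC-SHA256', 'user_id': '42'}, 'app-secret')
print(verify(sr, 'app-secret')['user_id'])  # 42
```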
This looks really useful. Thank you for writing it up!
Roger, can you also sign requests, i.e. for unit tests? Sort of the reverse operation for parse_signed_request(..), i need to create the signature (the part before the dot). I have the app secret and the payload.
I've justed posted some code to demonstrate how to do parse_signed_request() the reverse way. See tests.py in:
Wait just a moment...
Posted Oct 26, 2012 1:56 UTC (Fri) by pabs (subscriber, #43278)
[Link]
Posted Oct 26, 2012 3:59 UTC (Fri) by ncm (subscriber, #165)
[Link]
I wonder if merkaartor is usable. I've found the Java bits of LO supremely dispensable, ymmv.
Posted Oct 28, 2012 23:17 UTC (Sun) by pabs (subscriber, #43278)
[Link]
No thanks.
Posted Oct 26, 2012 4:29 UTC (Fri) by cyanit (guest, #86671)
[Link]
Posted Oct 26, 2012 4:41 UTC (Fri) by imgx64 (guest, #78590)
[Link]
Posted Oct 26, 2012 4:53 UTC (Fri) by Cyberax (✭ supporter ✭, #52523)
[Link]
Besides, Go itself is, well, kinda stupid. Well, not just kinda but very stupid.
Posted Oct 26, 2012 7:07 UTC (Fri) by danieldk (guest, #27876)
[Link]
Posted Oct 26, 2012 11:21 UTC (Fri) by juliank (subscriber, #45896)
[Link]
Posted Oct 26, 2012 16:20 UTC (Fri) by cmccabe (guest, #60281)
[Link]
I feel like it's unfair to blast Go for this. It's a new language; its GC is not going to beat HotSpot on day 1. A lot of the languages Go is often compared to, like Python, have far worse GC (CPython _still_ can't collect data structures which have cycles, if any of them has a finalizer).
Google is using Go in production (see Vitess for an open source example). Joel Spolsky wrote a good article about it. I'm sure you can find many more. The tl;dr version: don't believe the hype. You want exceptions only for signalling fatal errors.
re: domain specific languages. In general, I feel like you've got two approaches to follow when using DSLs. The first one is to use an extremely dynamic language like Lisp, Ruby, etc. and use eval-based tricks. The second is to create something like lex or yacc-- a preprocessor that converts source in $DSL to C, C++, Golang, etc. Both are perfectly valid approaches. I don't feel like the collection of macro tricks and templates often glorified as "a DSL" in C++ requires any comment, other than: please, make it stop.
With regard to generics: if you find yourself using interface{} everywhere, you're doing something wrong. If you program Go like it's Java, you're not going to be happy. Java is best programmed in Java. But if you program Go like Go, you just might find that you have what you need. With that being said, generics might be added to the language at some point.
Posted Oct 26, 2012 19:55 UTC (Fri) by danieldk (guest, #27876)
[Link]
You seem to be referring to the problem where (since the address space on 32-bit machines is small) some values may be identified as valid pointers. But that isn't the only problem with the garbage collector. It performs badly when lots of short-lived small objects are created. Eventually, it'll need a better collector.
> They are looking at improving it, but it will take time.
That's the standard answer. But currently, its GC is weak.
> A lot of the languages Go is often compared to, like Python, have far worse GC (CPython _still_ can't collect datastructures which have cycles, if any of them has a finalizer).
Here, the discussion was about Java and C#.
>.
Every serious chunk of Go code is interwoven with if statements to check for errors. It breaks the logical flow of functions. Besides that, it's not mandatory to bind the return values of a function. So for side-effecting functions, it's easy to accidentally ignore errors. And then there is the whole 'nil may not equal nil' issue:
> You want exceptions only for signalling fatal errors.
I want errors as values, *without* having to check the result of every function call. This has long been solved in option types that are also monads, see e.g. Maybe and Either in Haskell. Java checked exceptions are not too bad either: they don't clutter the flow, but you cannot silently ignore them (you have to catch them or indicate that your method might throw that particular exception).
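For illustration, the errors-as-values idea can be mimicked even in plain Python; a rough sketch of an Either-style result with a monadic bind (names are made up for the example):

```python
def parse_port(text):
    """Return ('ok', port) or ('err', message) instead of raising."""
    try:
        port = int(text)
    except ValueError:
        return ('err', 'not a number: %r' % text)
    if not 0 < port < 65536:
        return ('err', 'port out of range: %d' % port)
    return ('ok', port)

def bind(result, func):
    """Run func on the value only on success, like Either's monadic bind."""
    tag, value = result
    return func(value) if tag == 'ok' else result

# Chained calls short-circuit on the first error, so the happy path
# stays free of explicit if-error checks.
result = bind(parse_port('8080'), lambda p: ('ok', 'listening on %d' % p))
```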
Go doesn't differ much from C wrt. to error handling. Except that in C I know that -1 is always -1 or 0 is always 0.
> Both are perfectly valid approaches. I don't feel like the colleciton of macro tricks and templates often glorified as "a DSL" in C++ requires any comment, other than: please, make it stop.
Right, I was referring to the latter (EDSLs). So, what facilities does Go provide to support the latter? I haven't seen any real EDSL in Go. DSLs have been done in static languages before, without the horror of C++. E.g. Hamlet is a EDSL in Haskell for producing HTML in a type-safe manner:
Hakyll is an EDSL for constructing static site generators:
> With regard to generics: if you find yourself using interface{} everywhere, you're doing something wrong.
Sorry, I was being sarcastic there. It's the standard answer you will get on various Go fora: Go doesn't need generics, because there is interface{}. Anyway, the criticism still stands: Go doesn't have parametric polymorphism, so you either:
- If you are lucky, use non-empty interfaces.
- End up copying the same algorithm multiple times for different types.
- Write functions that take the empty interface and return the empty interface.
E.g. consider writing the Haskell function:
replicate :: Int -> a -> [a]
(Repeat a value n times.)
A Go equivalent is:
func replicate(n int, val interface{}) []interface{}
Not only does this completely throw away type safety, it's also an example of Go's tediousness. If you call, say replicate(20, 10), you cannot directly cast the result to an []int, because []interface{} is obviously something different.
By all means, don't take my word for it. I encourage people to read Mark Summerfield's Programming in Go. If you use a modern static language such as C#, Haskell, or whatever on a daily basis, you'll be thinking "heh, I could do this in X more elegantly in a far smaller number of lines" every half page or so.
The only great feature of Go is gofmt.
Posted Oct 26, 2012 21:55 UTC (Fri) by cmccabe (guest, #60281)
[Link]
> Go doesn't differ much from C wrt. to error handling. Except that in
> C I know that -1 is always -1 or 0 is always 0.
This is really unfair. Numeric types are strongly typed in Go.
Also, in C, 0 may not always be 0. Technically pointer NULL is a logical representation, not a physical one, so casting a NULL value for a pointer to an int may result in something other than 0. As someone invoking the name of Haskell, you should know this :)
No real-world C compilers that I know make use of this liberty. As usual, C++ "outdoes" C in the amount of implementation-dependent crud. Specifically, NULL values for pointers to data members are usually not represented by the bit pattern 0. Check this out for some nifty implementation-dependent spew:
#include <stdio.h>
class Foo {
int i;
};
static union {
int Foo::*i;
void *v;
} u;
int main(void) {
u.i = 0;
printf("iPtr = %p\n", u.v);
return 0;
}
I agree that the way in which interface values are compared to nil in Go is a little weird. Since the interface types are represented by a tuple internally, it would have been nicer to force developers to write out that tuple when comparing an interface to a constant.
We are going to have to agree to disagree about error handling. I could point out a lot of other notable programmers who prefer error codes to exceptions, not just Joel Spolsky. From the Google C++ coding standard, to Raymond Chen, to Linus Torvalds. But what do those guys know anyway? Java checked exceptions are the way and the light (hint: they're not.)
Comparing Go and Haskell is comparing apples and oranges. Go isn't intended to be a functional programming language. We already have a ton of those (OCaml, Scala, Clojure, etc.) Maybe for algorithms work it's good, but for systems-level work I think those are all very uninteresting.
Posted Oct 26, 2012 23:04 UTC (Fri) by Cyberax (✭ supporter ✭, #52523)
[Link]
Basically, it's Java or GTFO. At most you can make do with Mono. There are no other viable choices.
Posted Oct 26, 2012 23:54 UTC (Fri) by ibukanov (subscriber, #3942)
[Link]
I have found that in practice, the moment one wants to add more context to errors, the good old return flag to indicate unexpected return status results in the least cluttered code.
For example, in Java stack traces in log files by themselves can be rather useless for problem diagnostics. To facilitate the analysis it is often necessary to log state information from quite a few call sites. But in Java that results in rather ugly code:
try {
code();
} catch (RuntimeException ex) {
log_extra_error_state();
throw ex;
}
With return error codes the extra clutter is minimal - one just needs to add log_extra_error_state(); on the error path.
Now, I presume in Haskell such annotated error reports should be rather simple to implement using Either. But even in Haskell checking for errors after return is inevitable if one wants to add the context information only on error paths.
Posted Oct 27, 2012 6:42 UTC (Sat) by Cyberax (✭ supporter ✭, #52523)
[Link]
In Java you can _rethrow_ exceptions, without losing the cause. I.e.:
> catch(SomeException ex)
> {
> throw new MyException(log_state(), ex); //We DO NOT lose the cause
> }
Posted Nov 9, 2012 2:34 UTC (Fri) by HelloWorld (guest, #56129)
[Link]
Joel doesn't have a clue, and that article is a perfect example of that. Exceptions don't create any new exit points for a function; if you handle all possible errors in the way he suggests, you'll end up with just as many possible exit points. And also his other point about exceptions being invisible in the source code is a complete red herring, because you know what's invisible and hard to find with code inspection too? Missing error code checks.
And there's even more nonsense. He's right in that you can't write result = f(g(x)) when g might fail. But being able to return multiple values (which Haskell and ML don't allow, returning tuples is something different) doesn't do anything at all to solve that problem. So please, stop spreading that link, it's just stupid.
With regard to generics: if you find yourself using interface{} everywhere, you're doing something wrong.
Yes, and I know what: you're using Go.
With that being said, generics might be added to the language at some point.
Posted Nov 9, 2012 8:00 UTC (Fri) by paulj (subscriber, #341)
[Link]
This certainly raises the bar to fully understanding the behaviour of exception using code.
Posted Nov 9, 2012 9:33 UTC (Fri) by Cyberax (✭ supporter ✭, #52523)
[Link]
I.e. if you open a resource then you should do it in a try-finally block (or its analog). In C++ you should wrap in automatic object.
Error codes, on the other hand, promote sloppy design. Certain error codes are almost always ignored, like printf()'s return value (do you know that it can be used to process SIGINT gracefully?) or close()'s.
Posted Nov 9, 2012 9:47 UTC (Fri) by paulj (subscriber, #341)
[Link]
That coping with this requires structuring code in certain ways when using exceptions is true, but a different point, and it is still non-trivial at times because of the prior point. Just have a look at discussions on exceptions in C++ in constructors and destructors.
Note that I was not advocating for error return codes. That's a strawman argument you're incorrectly attributing to me.
Posted Nov 9, 2012 10:22 UTC (Fri) by hummassa (subscriber, #307)
[Link]
> Sorry, but it's a fact that exceptions introduce implicit control-flow branch points at each statement, potentially even multiple such.
THAT IS NOT A FACT AT ALL.
Those implicit control-flow branch points WILL HAVE TO BE DEALT WITH ANYWAY, they were already there. Only without exceptions you have to deal with them all at all positions in your code. Explicitly. You are writing code that is scaffolding code, not code that does what the program actually needs to do.
When you use exceptions, you write only the code that does what the program actually needs to do, with some sanity checks here and there. You follow the sanity rules, you are safe.
For instance, every time you open a file things can go wrong. Supposing you have a function call hierarchy main -> f -> g -> h, and h() wants to open a file, one of the following will happen:
1. you (generic "you", ok?) won't check open() results, will proceed as if the file was open, and undefined and terrible things will happen;
2. (try#2) you will check open() results, return an error code, g() does not check for the error code and proceeds, hilarity ensues;
3. (yes it finally works) g() checks h() error code, f() checks g() error code, main() checks f() error code, prints a suitable message.
4. (despair happening) you have to call i() from g() and j() from f() and both have to open files. At this point, if you don't forget anything (you have to do every one of these checks yourself, manually) you have a program that is at least 25% scaffolding.
With exceptions, one of those will:
1. open() throws, _main() aborts and prints "unexpected exception". The program already works.
2. (let's make this pretty) you put a catch() block on main(), print a pretty message cerr << "file " << name << " could not be open". The program still works.
3. (down the road) you have to add i() and j() and the program still works. No need to repeat step 2.
> Note that I was not advocating for error return codes.
No, but you are differentiating "implicit control-branch points" from "explicit" ones and it does give that impression. My point is that: implicit control-branch points are there anyway (you open a file, things can go wrong in the process), and dealing with them implicitly is better then explicitly.
Yes, dealing implicitly with exceptions means that sometimes you have to be exceptionally (ha!) safer (constructors/destuctors). But it just helps you to clarify your code and tell better when things can go wrong.
Posted Nov 9, 2012 10:29 UTC (Fri) by paulj (subscriber, #341)
[Link]
There are other ways to deal with errors, besides exceptions or error return codes. You can design the underlying, called-into code to explicitly follow a state machine, and have idempotent error state. Calling code then does a number of operations on such objects, checks at the end whether they succeeded or not, and if not, queries the reason for the error.
Posted Nov 9, 2012 16:35 UTC (Fri) by HelloWorld (guest, #56129)
[Link]
Posted Nov 9, 2012 23:40 UTC (Fri) by nix (subscriber, #2304)
[Link]
Posted Nov 10, 2012 3:26 UTC (Sat) by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Nov 10, 2012 10:21 UTC (Sat) by paulj (subscriber, #341)
[Link]
Posted Nov 10, 2012 16:10 UTC (Sat) by nix (subscriber, #2304)
[Link]
That memory allocation is not a shared resource except inasmuch as the heap is shared -- in which case you have to consider almost *everything* a use of a shared resource -- but you still need to consider the semantics of exceptions in order to figure out how to avoid a memory leak. And this soon gets thoroughly unobvious.)
Posted Nov 10, 2012 21:48 UTC (Sat) by Cyberax (✭ supporter ✭, #52523)
[Link]
#include <stdio.h>
#include <memory>
#include <stdexcept>
class tracer
{
public:
tracer() { printf("Constructing\n"); }
~tracer() { printf("Dying\n"); }
};
class tester
{
std::auto_ptr<tracer> t1_;
std::auto_ptr<tracer> t2_;
public:
tester()
{
t1_=std::auto_ptr<tracer>(new tracer());
throw std::runtime_error("Test");
t2_=std::auto_ptr<tracer>(new tracer());
}
};
int main()
{
try
{
tester t;
} catch(const std::runtime_error &ex)
{
//Print exception info
}
return 0;
}
cyberax@cybmac:~/work$ ./a.out
Constructing
Dying
Posted Nov 10, 2012 23:06 UTC (Sat) by nix (subscriber, #2304)
[Link]
Consider just these three lines of your example:
t1_=std::auto_ptr<tracer>(new tracer());
throw std::runtime_error("Test");
t2_=std::auto_ptr<tracer>(new tracer());
Let's replace the stupid throw with something more likely to be actually seen in practice, say:
t1_=std::auto_ptr<tracer>(new tracer());
ptr=std::auto_ptr<blah>(new raiied_blah());
t2_=std::auto_ptr<tracer>(new tracer());
Now... if any of those constructors but the first throws, what do you do about it? If, for whatever reason, you cannot use auto_ptr (which is, to be honest, simply pushing off the RAII memory-allocation overhead to something else that has already done all the work) how do you prevent the earlier-allocated ones from leaking? That's right, you have to free at appropriate times in a catch handler. Now figure out which bits can throw. It's hard. It's very, very hard. I recommend Scott Meyers's Effective C++ books for more than you can possibly stomach on this (I wish I could reread mine, but I loaned them out and they never came back, I'm sure you know how it is).
Posted Nov 10, 2012 23:22 UTC (Sat) by Cyberax (✭ supporter ✭, #52523)
[Link]
>If, for whatever reason, you cannot use auto_ptr (which is, to be honest, simply pushing off the RAII memory-allocation overhead to something else that has already done all the work) how do you prevent the earlier-allocated ones from leaking?
By using auto_ptr or any other fitting smart pointer, there's literally no measurable overhead in using it. And there's no excuse in not doing it - and that was clear from the very start. That's actually a rationale for not including the "finally" keyword in C++ - because it might detract from using RAII.
Of course, if you really don't want to do it then you'll have to do the try..catch dance. Exactly like in C where ALMOST EVERY function might result in error and the most common pattern of error handling is "goto err_exit". That's actually why glibc handles OOM by calling abort() - it's simply too complicated to make everything in C error-safe.
PS: I had actually written a simple model verifier to verify my own collection library for upholding exception guarantees. That means that I've obviously read Sutter, Meyers, Stroustrup, Alexandrescu and quite a lot of other C++ writers.
Posted Nov 10, 2012 23:25 UTC (Sat) by foom (subscriber, #14868)
[Link]
> If, for whatever reason, you cannot use auto_ptr ( which is, to be honest, simply pushing off the RAII memory-allocation overhead to something else that has already done all the work
That's the whole point! You *always* just use auto_ptr (or another smart pointer, e.g. unique_ptr) to take care of it for you! That's exactly why RAII is so nice, and has been best practice in C++ for years!
Posted Nov 11, 2012 9:32 UTC (Sun) by paulj (subscriber, #341)
[Link]
Posted Nov 11, 2012 10:37 UTC (Sun) by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Nov 11, 2012 12:53 UTC (Sun) by paulj (subscriber, #341)
[Link]
Could good C++ programmers have created their own smart pointers in the 80s? I guess so. My strong impression though is that C++ programmers would have been *way* ahead of the state of understood wisdom in the C++ world at that time. My impression
To claim smart pointers are trivial, widely understood techniques since the 80s stretches credibility somewhat. If they're so trivial, so widely understood for so long, why did it take 20+ years to standardise some? Why is an only relatively recently standardised form of smart pointer already being deprecated in the next C++ standard? (auto_ptr for unique_ptr). My sense is that smart pointers are so trivial that even C++ standards committee members hadn't really fully grokked the implications of copying, moving and sharing them until relatively recently - but I guess they're just "old C programmers who don't know better".
Unfortunately, in the real world, many programmers just aren't quite as brilliant as you.
Posted Nov 11, 2012 13:04 UTC (Sun) by Cyberax (✭ supporter ✭, #52523)
[Link]
> Could good C++ programmers have created their own smart pointers in the 80s? I guess so. My strong impression though is that C++ programmers would have been *way* ahead of the state of understood wisdom in the C++ world at that time. My impression
Well, you'd be wrong. The need for RAII-d resources was understood quite early in the game. I'm not sure about the '80s - there was no significant amount of C++ code then - but surely during the early '90s.
>If they're so trivial, so widely understood for so long, why did it take 20+ years to standardise some? Why is an only relatively recently standardised form of smart pointer already being deprecated in the next C++ standard? (auto_ptr for unique_ptr).
Mostly for historical reasons. The first C++ Standard was developed in great haste (by ISO standards), there simply was not time to test the proposed things in production. And then it was too late so major additions had to wait until C++11. The same story happened with hash maps, btw.
However, Boost and other libraries had no shortage of various smart pointers.
Posted Nov 11, 2012 14:56 UTC (Sun) by paulj (subscriber, #341)
[Link]
When was Boost created? late 90s. Before Boost there was the HP and SGI STL, which I gather inspired the standardised STL (and Boost?), and iostream. That work dates from the early 90s AFAICT. The SGI STL page doesn't seem to have any kind of smart pointer, dated 1994. C++98 has the auto_ptr, but it apparently is flawed. The early Boost pages reference this:... which references books discussing smart pointer programming *patterns* from '92, '94.
I know this stuff is all trivial to you, but it does seem to me that it took about 10 to 15 years (80s to mid/late 90s) for leading C++ programmers to agree on patterns for dealing with exception safety, such as RAII, then getting some kind of smart pointer support in the language (standard or generally available libraries) in order to cope with remaining pitfalls in C++ RAII+exceptions. That smart pointer support then needed further refinement in the mid/late 90s and early 2000s to get to where we are today.
That doesn't quite suggest to me that this stuff is quite as trivial as you suggest it is. Finally, even with smart pointers, I still don't find exception handling code that uses them to be trivial! However, you do seem to be a superior breed of programmer to the rest of us, so perhaps that's the reason for the discrepancy in views ;). (Intended as a gentle jibe, I hope you'll accept it in that spirit).
Posted Nov 12, 2012 2:05 UTC (Mon) by Cyberax (✭ supporter ✭, #52523)
[Link]
>When was Boost created? late 90s. Before Boost there was the HP and SGI STL, which I gather inspired the standardised STL (and Boost?), and iostream.
The STL specification was written before there was a single more-or-less complete STL implementation (and it shows). That was also a problem with lots of C++ features (like exception specification or template export).
Exceptions appeared in GCC 2.7 in 1997, I think. Before that there was little reason to use them and to create best practices for them. Though the usefulness of RAII was understood earlier.
In Java exceptions with try..finally also appeared at about the same time. But without RAII.
Posted Nov 12, 2012 14:35 UTC (Mon) by paulj (subscriber, #341)
[Link]
As for exceptions, they were developed in the late 80s, standardised in the early 90s, and there were commercial compilers supporting exceptions since at least '92 apparently. See:. The release history of GCC possibly isn't that relevant, a number of proprietary implementations of C++ had more significant usage than g++ back then iirc.
Anyway..
Posted Nov 12, 2012 14:38 UTC (Mon) by paulj (subscriber, #341)
[Link]
Posted Nov 9, 2012 23:38 UTC (Fri) by nix (subscriber, #2304)
[Link]
Posted Nov 10, 2012 3:25 UTC (Sat) by Cyberax (✭ supporter ✭, #52523)
[Link]
The main problem was that exceptions were added quite late in the game, and they were not reliable and/or generally available until at least the early 2000s. So we're stuck with a huge amount of legacy non-exception-safe code.
But that's a poor excuse for creating a new language without exception support.
Posted Nov 10, 2012 16:06 UTC (Sat) by nix (subscriber, #2304)
[Link]
Perhaps I was hallucinating and e.g. Herb Sutter's writings on the subject do not in fact exist, and it was 'clear from the start'. That would explain it, but I don't think that's so.
Posted Nov 10, 2012 21:49 UTC (Sat) by Cyberax (✭ supporter ✭, #52523)
[Link]
A simple question - how do you deal with errors in the ubiquitous "goto error_exit" pattern in C?
Posted Nov 9, 2012 8:08 UTC (Fri) by ekj (guest, #1524)
[Link]
Here's what he said:
"I think the reason programmers in C/C++/Java style languages have been attracted to exceptions is simply because the syntax does not have a concise way to call a function that returns multiple values"
Notice that he's talking of the *syntax*, and functionally the tuple-unpacking that some languages allow does come close to being equivalent to returning multiple values.
handle, errcode = open('blah.dat')
This is prettier than the all-too-common functions that do things like "this function will return the count of Widgets, or -1 if something went wrong"; it's also less error-prone, because with the above you can't accidentally treat an error code as valid data (and proceed as if there were really -1 Widgets, say).
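The difference can be made concrete with a tiny Python sketch (hypothetical widget-counting functions, following the tuple convention above):

```python
def count_widgets_sentinel(inventory, kind):
    """Sentinel style: -1 doubles as both 'error' and a plausible count."""
    if kind not in inventory:
        return -1
    return inventory[kind]

def count_widgets_tuple(inventory, kind):
    """Tuple style: the error channel is separate from the data channel."""
    if kind not in inventory:
        return None, "unknown kind: %s" % kind
    return inventory[kind], None

inventory = {"sprocket": 3}
count, err = count_widgets_tuple(inventory, "gear")
# count is None on error, so using it as a number fails loudly instead of
# silently behaving like "-1 widgets".
```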
Posted Nov 9, 2012 22:25 UTC (Fri) by HelloWorld (guest, #56129)
[Link]
Posted Nov 9, 2012 9:28 UTC (Fri) by renox (subscriber, #23785)
[Link]
Also why do you consider returning a tuple different from returning multiple values?
The only thing that matters is allowing the programmer a readable way to access multiple return values ( y,z = f(x) ); whether these return values are in a tuple or not is not so important, though I would argue that the tuple is better, as it also allows f(g(x)) (provided it still allows simple access to the multiple values).
Posted Nov 9, 2012 15:49 UTC (Fri) by HelloWorld (guest, #56129)
[Link]
Your analysis of Joel's article seems quite shallow,
Also why do you consider returning a tuple different from returning multiple values?
dup x = (x,x)
mad a b c = a*b+c
foo x = mad x x x
/mad { mul add } def
/foo { dup dup mad } def
Posted Nov 9, 2012 16:14 UTC (Fri) by apoelstra (subscriber, #75205)
[Link]
Posted Nov 9, 2012 21:33 UTC (Fri) by HelloWorld (guest, #56129)
[Link]
function dup(x) return x, x end
function add(x,y) return x + y end
print(add(dup(3))) -- prints 6
function dup(x) return x, x end
function mad(x,y,z) return x*y+z end
print(mad(dup(dup(3))))
Posted Nov 10, 2012 0:22 UTC (Sat) by hummassa (subscriber, #307)
[Link]
sub dup { $_[0], @_ }
sub mad { $_[0]*$_[1]+$_[2] }
say mad dup dup 3
Posted Nov 10, 2012 14:09 UTC (Sat) by HelloWorld (guest, #56129)
[Link]
Posted Nov 11, 2012 0:44 UTC (Sun) by hummassa (subscriber, #307)
[Link]
sub mad { my($a, $b, $c, @rest) = @_; $a*$b+$c, @rest }
So you can use @_ as the Forth/PostScript stack...
Posted Nov 9, 2012 17:50 UTC (Fri) by mathstuf (subscriber, #69389)
[Link]
compose :: (Num t) => t -> t
compose = uncurry (uncurry ((uncurry .) . (const . mad))) . dup . dup
Basically, the `const' adds a fourth (ignored) parameter to `mad' and then we uncurry that function to get it to take a tuple of tuples and then `dup . dup' the input to make `((x, x), (x, x))'. Granted, this looks nasty, but it is also something that lambdabot might have been able to generate when asked how to rewrite the obvious implementation in a point-free way.
Posted Oct 26, 2012 16:48 UTC (Fri) by cmccabe (guest, #60281)
[Link]
Go has excellent reflection. See the reflect package for details. This makes sense because one of the criticisms the Go authors had of C++ was the lack of good reflection support.
Its dynamic code loading support is not poor, but nonexistent. Go doesn't support loading modules at runtime. In general, loading modules into a giant monolithic program is often a poor design. It's bad for robustness and often bad for security too. It's usually better to structure your design in terms of communicating processes. Go makes that easy by supporting Go gobs, protocol buffers, JSON, etc.
I use Java every day at work. One of the big hassles is setting up your CLASSPATH and making sure you have all the jars you need, but not the ones you don't. It's not just possible but common for different versions of the same jar to fight with one another when you have multiple projects dumping jars into the dumping ground-- er, sorry, runtime classpath. It's funny that Java, often held up as the pinnacle of static-ness, has one of the most dynamic loading systems out there.
This extremely dynamic loading system was also implicated in some of the recent Java security vulnerabilities. Basically some pretty cool guy, eh, managed to get access to something very much like eval().
Posted Oct 26, 2012 19:59 UTC (Fri) by danieldk (guest, #27876)
[Link]
It's ironic to speak of security, when you do static linking *all the time*.
Posted Oct 27, 2012 21:11 UTC (Sat) by cmccabe (guest, #60281)
[Link]
It's normal for enterprise environments to be a few versions of Java behind. Some of the build tools, like Maven, encourage this anti-pattern by allowing people to specify library dependencies on specific versions. In fact, I think it issues a warning message in some cases if you fail to specify an explicit version dependency.
The underlying problem is that there's no concept of API version numbers in Java. In C there are library naming conventions that allow you to ask for the newest version of the library that implements a given API. But Java never developed those conventions, so your only safe course of action is to peg to a specific version.
By the way, JDK7 is still considered new and untested by most enterprise Java users. I don't even have it installed yet because what would be the point? It's still experimental in the context of Hadoop.
Another thing to note is that there are going to be a lot fewer security issues in managed code than in C/C++ libraries. I enjoy programming in C and C++, but let's not kid ourselves. I think Go's model is fine. If there's a big security flaw discovered in some Go library, it will trigger a lot of binaries to be re-downloaded from Linux distros. But I don't expect that to happen that often.
Posted Oct 28, 2012 16:29 UTC (Sun) by nix (subscriber, #2304)
[Link]
The irony is that the original implementer of this feature (before GNU souped it up by adding default symbol versions) was... Sun, and it has stood them in very good stead over the years. I have no idea why on earth the Java people didn't learn from the people across the hall in this area.
Posted Oct 27, 2012 12:36 UTC (Sat) by epa (subscriber, #39769)
[Link]
Posted Oct 27, 2012 21:54 UTC (Sat) by cmccabe (guest, #60281)
[Link]
On a more serious note: no, it does take 20 years for a programming language to become mature. Go already has a good collection of libraries and is adding more all the time. It will take more time to get a good, precise, compacting GC that can go toe-to-toe with HotSpot. But not 20 years.
Posted Oct 26, 2012 16:03 UTC (Fri) by shmerl (guest, #65921)
[Link]
Posted Oct 26, 2012 19:08 UTC (Fri) by sorpigal (subscriber, #36106)
[Link]
Except that it's not especially mainstream, not strictly strongly typed and not entirely finished. Minor details, surely.
Since you asked
Posted Oct 26, 2012 21:06 UTC (Fri) by ncm (subscriber, #165)
[Link]
C++, for the present.
Not garbage-collected, you say? If what you want from a garbage-collected language is no "delete" statements, you already have that. It's entirely possible -- indeed, recommended -- to write large systems in C++ with no delete statements at all. The standard library provides all you need, starting with smart pointers. (Technically, smart-pointer implementations have delete statements, but you never need to see them.) Other libraries provide database handles, transactions, windows, and what-have-you.
If you know what you're about, you don't really want garbage collection anyway, because it interacts badly with destructors. Without destructors, exceptions lose most of their value. The whole point of exceptions is so you don't have to hand-write exception-handling code everywhere. Besides being tedious, scattered exception-handling code, whether in catch blocks or under "if (rc)", is poorly exercised, so often buggy. It's better to collect up exceptions at subsystem boundaries with all the local cleanup already done. Destructors do that cleanup. They get exercised all the time, so you can trust them. Most of what your destructors do, such as rolling back transactions, closing files, and freeing memory, is generated by the compiler from library code, so you never see it, and it's right every time. Memory management comes for free.
The Glorious Successor to C++, whatever it turns out to be, will also not need garbage collection.
Posted Oct 26, 2012 21:26 UTC (Fri) by Jandar (subscriber, #85683)
[Link]
Posted Oct 26, 2012 23:01 UTC (Fri) by ncm (subscriber, #165)
[Link]
The world needs a systems-programming language as powerful as C++, but without the cruft. (Cruft being the C type system, and failed experiments like exception checking and virtual functions.) It is unfortunate that current efforts at language design are almost always contaminated with CONSes. A sign of the strength of a language design is that library components do what had to be built into weaker languages. Virtual function support should be a library facility in the Glorious Successor to C++.
Posted Oct 27, 2012 6:43 UTC (Sat) by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Oct 28, 2012 16:32 UTC (Sun) by nix (subscriber, #2304)
[Link]
Posted Nov 1, 2012 16:43 UTC (Thu) by jtc (subscriber, #6246)
[Link]
Well, if you eliminate Java and C#, about all that's left is Eiffel.
Posted Nov 1, 2012 19:10 UTC (Thu) by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Nov 1, 2012 19:43 UTC (Thu) by bronson (subscriber, #4806)
[Link]
Eiffel ain't mainstream.
Posted Oct 26, 2012 4:42 UTC (Fri) by eru (subscriber, #2753)
[Link]
So if the free software community tried to ignore Java, it would cut itself out of substantial and important application areas. Consider also the world domination strategy: It is way easier to port Java-based applications to run on Linux, than to do so for applications based on C# or other MS framework-du-jour. Java was at least designed with portability in mind, even if that aspect is not as perfect as advertised.
No thanks to trolls.
Posted Oct 26, 2012 5:30 UTC (Fri) by jensend (guest, #1385)
[Link]
There's more software under Free Software compatible licenses in Java than in any other languages except C and C++. Such software, along with Java software under all kinds of other licenses, is used on the majority of Linux systems.
Just because you don't like to use it doesn't mean it has no relevance. Nor does it mean you can write off everyone who uses it (who greatly outnumber you and those with similar hostile attitudes) as corporate drones. You have no right to demand that subjects relevant to them not receive coverage from LWN. You most definitely have no right to dictate how developers spend their time and efforts.
Posted Oct 26, 2012 6:28 UTC (Fri) by alankila (subscriber, #47141)
[Link]
Posted Oct 26, 2012 7:05 UTC (Fri) by Cyberax (✭ supporter ✭, #52523)
[Link]
However, it's not cross-platform (yeah, I know about Mono).
Posted Oct 26, 2012 8:22 UTC (Fri) by khim (subscriber, #9252)
[Link]
Posted Oct 26, 2012 12:22 UTC (Fri) by alankila (subscriber, #47141)
[Link]
Posted Oct 26, 2012 16:36 UTC (Fri) by cortana (subscriber, #24596)
[Link]
Posted Oct 26, 2012 17:20 UTC (Fri) by khim (subscriber, #9252)
[Link]
Bastion uses Mono because it was not developed for Windows. Chrome (NaCl) and Mac ports were planned relatively early thus it made sense to use Mono.
If you use recent versions of C# then it's basically impossible to use Mono.
Posted Nov 9, 2012 3:14 UTC (Fri) by HelloWorld (guest, #56129)
[Link]
Posted Nov 9, 2012 4:35 UTC (Fri) by bronson (subscriber, #4806)
[Link]
Posted Nov 9, 2012 15:50 UTC (Fri) by HelloWorld (guest, #56129)
[Link]
Posted Nov 9, 2012 18:08 UTC (Fri) by bronson (subscriber, #4806)
[Link]
All this has been beaten to death elsewhere so I'll leave it here.
Posted Oct 26, 2012 20:25 UTC (Fri) by drag (subscriber, #31333)
[Link] | http://lwn.net/Articles/521415/ | CC-MAIN-2013-20 | refinedweb | 6,385 | 69.41 |
SIGNBIT(3) BSD Programmer's Manual SIGNBIT(3)
signbit - test sign
libc
#include <math.h> int signbit(real-floating x);
The signbit() macro determines whether the sign of its argument value x is negative. An argument represented in a format wider than its semantic type is converted to its semantic type first. The determination is then based on the type of the argument.
The sign is determined for all values, including infinities, zeroes, and NaNs
The sign is determined for finites, true zeros, and dirty zeroes; for ROPs the sign is reported negative.
The signbit() macro returns a non-zero value if the sign of its value x is negative. Otherwise 0 is returned.
No errors are defined.
fpclassify(3), isfinite(3), isnormal(3), math(3)
The signbit(). | http://mirbsd.mirsolutions.de/htman/sparc/man3/signbit.htm | crawl-003 | refinedweb | 128 | 58.28 |
12 April 2007 12:37 [Source: ICIS news]
SHANGHAI (ICIS news)--Sinopec cut linear low density (LLDPE) prices by 1.2% in north China due to inventory pressure and weak demand, traders, distributors and buyers said on Thursday.?xml:namespace>
The Chinese petrochemical major reduced the LLDPE price by yuan (CNY) 150/tonne ($19.4/tonne) to CNY 11,950-12,050/tonne, they added.
“We decreased the ex-works price due to fears of inventory accumulation during Labour Holiday week [in early May] and that demand for mulch film would weaken from the Chingming Festival [in early April],” a sales manager from Sinopec's north China sales branch said.
Despite the cuts, LLDPE trades in north ?xml:namespace>
“We lost some orders from converters after local producers’ price drop this morning. We don’t know how to offer and how much to offer,” one trader said.
Only a few deals were done at CNY 12,050-12,150/tonne ex-warehouse (EXWH) for LLDPE in north
Some converters say they would increase their mulch film production in mid-to-late April. “We have cut our mulch film operating rate to 40% from 70-80% currently due to reducing orders,” one film maker said.
($1 = CNY | http://www.icis.com/Articles/2007/04/12/9019831/sinopec-north-china-cuts-lldpe-price-by-1.2.html | CC-MAIN-2014-52 | refinedweb | 206 | 69.92 |
Based on lots of comments and other stuff on the “Internets”, I have started using Django-Webtest. Webtest inherits from Django TestCase. It allows you to interact with forms more like the user does. For example you can select an option from an HTML Select tag. I am still trying to decide if this extra functionality is worth having to deal with another layer of documentation. Especially since that layer is sparse.
Anyway, these are my notes for getting things done with Webtest. This post is not meant to be a tutorial or replace the docs. Rather its just stuff I have found useful and hard to find in the docs.
Viewing the Page in a Browser
Let’s say you are testing a form and things are not going as planned. Sometimes the easiest way to solve the problem is to view the page. Here is how:
form = self.app.get(reverse('login')).form form['email'] = user.email form['password'] = self.test_password response = form.submit().follow() response.showbrowser()
The last line is where the magic is.
Finding Stuff on the Page
Webtest provides the Beautiful Soup parsed HTML in the .html attribute of a response. One way this is useful is to assert things about the content of a page. For example:
response = self.app.get(reverse('my_url', args=[obj.id]), user='jim') delete_button = response.html.find('button', id='delete_button') self.assertIsNotNone(delete_button)
Checking the URL After Redirect
Sure, you can use status code. But what if you want to know where you were redirected? Use this Django assertion:
response = form.submit() self.assertRedirects(response, reverse('my_named_url')) response = self.app.get(response.url, user='test') # do the redirect
Occasionally this does not work. For some reason, the port gets inserted in the response.url, while its not in reverse(). I have seen this problem occur with some assertRedirects and not others in the same test method. Here is one way to get around this:
from urlparse import urlparse def my_assertRedirects(self, response, expected_url): parts = urlparse(response.url) self.assertEqual(parts.path, expected_url)
Handling MultipleChoiceField
I am using django-crispy forms. When I make an InlineCheckBoxes of a Django MultipleChoiceField, I end up with several HTML checkbox tags with the same name. Hence the usual method for assigning a value does not work. Here is something that does work:
form.set('my_checkbox_name', True, index=1)
where the index keyword specifies which checkbox with name “my_checkbox_name”.
Handling Form Errors
One way to handle form errors is to look at the response_status. For most of my forms, a status of 200 is returned when there is an error. While a 302 (redirect) is returned when there are no errors. Although this works, it feels like a pretty blunt instrument for handling form errors.
One interesting fact about WebTest is it inherits from Django TestCase. What his means is that some of the functionality of TestCase still works. For example, you can submit a form using the post method of the Django Client class.
For error handling, you can still use:
response = form.submit('save') self.assertFormError(response, 'form', 'user_type', u'This field is required.')
or for non field errors:
response = response.form.submit('save') self.assertFormError(response, 'form', None, ['an error message', 'another error message'])
Note that the word form is a string. It is the name of the form in the context dict that you send to render. Here are the docs on assertFormError.
What if you are getting a form error, but you are not sure what error it is? The place to start looking is in the response.context list.
Form Submit
Lets say you have a form that redirects after it is successfully submitted. If you submit it like this:
response = form.submit('save') print response.status >>>302 FOUND response.showbrowser()
In the browser, you will see a blank page. What is going on here is Webtest did not automatically go to the page you are redirecting to. To have it go to that page, do this:
response = form.submit('save').follow()
ModelForm Initial Values
This one is baffling. If you pass initial values to a ModelForm using the initial keyword, they do not appear in the form! How do I know this? If I run the code outside of testing, they appear. If I run the code in testing and do showbrowser() they are not there. If I set break points during testing, I can see that the “initial” keyword has the correct values.
HTTP Forbidden (403)
With the systems I design, it is often necessary to limit access to views based on attributes of the user. Thus a big part of testing is making sure users cannot view things they are not allowed to view.
You would think this would be a simple test:
response = self.app.get(reverse('my_view')) self.assertEqual(response.status_code, 403)
But if you do that you get an error:
AppError: Bad response: 403 FORBIDDEN
Instead try this:
response = self.app.get(reverse('my_view'), status=403)
If the response status is not 403, an error will be raised, which is exactly what you want.
Clicking a Link
If there is a link on your page (<a></a>) the webtest response object has a method called “click” which lets you click on the link. Seems easy enough. You can send regular expression patterns or strings to select the link you need.
The gottcha is when the uses “onclick” as its action. When the method is searching for the link, it also checks to see what the tags href attribute is. If there is no href attribute, then the link is skipped even if it matches the linkid.
In a way this makes sense because the click method ends with a call to the goto method. In any case, WebTest does not run javascript. Guess I need to use selenium for this.
Use the Verbose Keyword
Several methods accept the keyword “verbose”. Use it to speed up debugging.
Sessions
The good news is you can inspect session variables by looking at:
self.app.session
The bad news is I cannot figure out how to set a session variable inside a test. It seems that writing to self.app.session does not work. WebTest uses a special backend to handle auth. It seems that is not connected to the normal session processing. If you know how to do this, let me know!
Ugh… this worse than I thought. For some mysterious reason, the session vars are not carrying over between views. I am solving this by switching to Django TestCase. | https://snakeycode.wordpress.com/tag/django/ | CC-MAIN-2019-30 | refinedweb | 1,095 | 68.47 |
Toshio Kuratomi (a badger gmail com) said: > Does this look like the right generic structure for doing this: > > %if 0%{?fedora} >= X || 0%{?rhel} >= Y > # Do things from X until Current Fedora, Y until Current RHEL > # Y should be the latest released version if we don't know that > # it's not going to change in the next version > %else > %if %{fedora} && 0%{fedora} < X && ! %{rhel} > # Do things for Fedora < X > %else > %if ! %{fedora} && %{rhel} && 0%{rhel} < Y > # Do things for RHEL < Y > %else > # Do things for any other distros > %endif > %endif > %endif > > If so, then we'd want to prettify and codify it a bit so we can put it > in the Packaging Guidelines Looks reasonable. Bill | https://www.redhat.com/archives/fedora-devel-list/2009-June/msg00440.html | CC-MAIN-2014-15 | refinedweb | 118 | 65.86 |
You all might remember the line-following robot I built using Arduino. Well today, we are going to do the same thing, but with a Raspberry Pi.
Special shout out to Matt Timmons-Brown for this project idea. He is the author of a really good book on Raspberry Pi robotics: (Learn Robotics with Raspberry Pi). Go check it out!
Video
Here is a video of what we will build in this tutorial.
Requirements
Here are the requirements:
- Build a line-following robot using Raspberry Pi.
You Will Need
The following components are used in this project. You will need:
- Wheeled Robot with Raspberry Pi
- Black Electrical Tape 3/4 Inch
- White Poster Board 22 inch by 28 inch
- Two 5V Infrared Line Track Follower Sensor TCRT5000 (available from eBay)
- These sensors have an infrared (IR) receiver and transmitter.
- Black things (e.g. black electrical tape) absorb IR light; white things (e.g. white poster board) reflect IR light.
- The receiver will not be able to detect IR light emitted by the transmitter when the robot is over the black electrical tape. Output (OUT pin) of the sensor will be LOW.
- The receiver will detect IR light emitted (and then reflected) by the transmitter when the robot is on top of the white poster board because the white poster board will reflect the IR light (black will absorb the IR light). The output (OUT pin) of the sensor will go HIGH.
- The robot will use the information provided by this sensor to steer itself and follow the black electrical tape line.
- 2 x 2 Lego Blocks (available from eBay)
- Female-to-female Jumper Wires
- VELCRO Brand – Thin Clear Fasteners 7/8 in Squares
- Male-to-male Jumper Wires
- Female-to-male Jumper Wires
Directions
Connecting the Infrared Line Sensor
Make sure the Raspberry Pi is turned OFF.
Connect the VCC pin of the IR line sensor to pin 1 of the Raspberry Pi.
Connect the GND pin of the line sensor to the blue (negative) ground rail of the solderless breadboard.
Connect the OUT pin of the line sensor to pin 21 (GPIO 9) of the solderless breadboard.
Testing the Infrared Line Sensor
Power up your Raspberry Pi, and open IDLE.
Create a new program called test_line_following.py.
Save it in to your robot directory.
Here is the code:
import gpiozero import time # Description: Code for testing the # TCRT5000 IR Line Track Follower Sensor # Author: Addison Sears-Collins # Date: 05/29/2019 # Initialize line sensor to GPIO9 line_sensor = gpiozero.DigitalInputDevice(9) while True: if line_sensor.is_active == False: print("Line detected") else: print("Line not detected") time.sleep(0.2) # wait for 0.2 seconds
Create a line-following course using your black electrical tape and your poster board. It should look something like this.
I recommend leaving a 3-inch margin between the side of the poster board and the course. Don’t make curves that are too sharp.
Run your test program from a terminal window.
cd robot
python3 test_line_following.py
Move your track in front of the sensor to see if the terminal prints out “Line detected. “
If you are running into issues, use a screwdriver to adjust the sensitivity of the sensor.
That white and blue potentiometer is what you should tweak.
Connect the other IR line sensor.
Connect the VCC pin of the IR line sensor to pin 17 of the Raspberry Pi using a female-to-female jumper wire.
Connect the GND pin of the IR line sensor to the blue (negative) ground rail of the solderless breadboard.
Connect the OUT pin of the IR line sensor to pin 23 (GPIO 11) of the Raspberry Pi.
Attaching the Sensors
Attach the first IR line sensor you wired up to the front, left side of the robot.
Attach the second IR line sensor to the right side of the robot.
Both sensors need to be just off the ground and can be mounted using 2×2 Lego blocks that extend downward from the body of the robot.
A piece of VELCRO is sufficient to attach both sensors.
Run the wires connected to the IR line sensors down through the gap in the robot body.
Create the Line-Following Program in Python
Open IDLE on your Raspberry Pi.
Create a new file inside your robot directory named
line_following_robot.py
Here is the code:
import gpiozero # File name: line_following_robot.py # Code source (Matt-Timmons Brown): # Date created: 5/29/2019 # Python version: 3.5.3 # Description: Follow a line using a TCRT5000 IR # Line Following Sensor robot = gpiozero.Robot(left=(22,27), right=(17,18)) left = gpiozero.DigitalInputDevice(9) right = gpiozero.DigitalInputDevice(11) while True: if (left.is_active == True) and (right.is_active == True): robot.forward() elif (left.is_active == False) and (right.is_active == True): robot.right() elif (left.is_active == True) and (right.is_active == False): robot.left() else: robot.stop()
Deploying Your Line-Following Robot
Place your robot on your track. Make sure the line-following sensors are directly above the black line.
Verify that your Raspberry Pi is connected to battery power, and your 4xAA battery pack is turned on.
Run your program from inside the robot directory.
cd robot
python3 line_following_robot.py
Watch your robot follow the line! Press CTRL-C anytime to stop the program. | https://automaticaddison.com/tag/raspberry-pi/page/2/ | CC-MAIN-2021-21 | refinedweb | 878 | 68.16 |
Adding static typing and scope validation, part 2: type inference and validation
This post continues my series describing how I solved certain problems while creating a toy programming language. Today I’ll discuss static typing and type inference.
Tracking types
To have static typing, each syntax tree node needs to track what kind of type it is. Integers are integers, words are resolved user defined types, quoted strings are strings. But terminals are not the only nodes with types. Each syntax trees type is derived from its terminals. For example the expression syntax tree that represents the following:
(1 + 2) == 5
Is actually this:
== / \ + 5 / \ 1 2
We have a tree that has an expression on the left, and a literal on the right. We need to know what the expression on the lefts type is before we can do anything. Since the scope builder is depth first and each syntax tree’s type is composed of its internal tree types, we’ll be guaranteed that by the time we evaluate the type of the
== tree we’ll know that the left hand side is an integer type.
Even though the left hand side expression is an integer, and the right hand side literal is an integer, the expression is held together with an
== token which means it will be a boolean type. Anything that uses this expression further up the tree can now know that this expression is of type boolean. This is useful info because we probably want to validate that, for example,
if statements predicates only have boolean types. Same with
while loops, and parts of a
for loop, and
else, etc.
Type Assignments
For each expression, while resolving types, I set the current expression trees type to be derived from its branches. If we have a leaf, then either resolve the type (if it is a user defined variable) or create a type describing it:
[Note: when types are resolved, how symbols are created, and solving forward references is coming in part 3]
public void Visit(Expr ast) { if (ast.Left != null) { ast.Left.Visit(this); } if (ast.Right != null) { ast.Right.Visit(this); } SetScope(ast); if (ast.Left == null && ast.Right == null) { ast.AstSymbolType = ResolveOrDefine(ast); } else { if (ResolvingTypes) { ast.AstSymbolType = GetExpressionType(ast.Left, ast.Right, ast.Token); } } }
The expression visit function calls a helper method that takes the left and right trees as well as the current token
/// <summary> /// Determines user type /// </summary> /// <param name="left"></param> /// <param name="right"></param> /// <param name="token"></param> /// <returns></returns> private IType GetExpressionType(Ast left, Ast right, Token token) { switch (token.TokenType) { case TokenType.Ampersand: case TokenType.Or: case TokenType.GreaterThan: case TokenType.Compare: case TokenType.LessThan: case TokenType.NotCompare: return new BuiltInType(ExpressionTypes.Boolean); case TokenType.Method: case TokenType.Infer: if (right is MethodDeclr) { return new BuiltInType(ExpressionTypes.Method, right); } return right.AstSymbolType; } if (!ResolvingTypes && (left.AstSymbolType == null || right.AstSymbolType == null)) { return null; } if (!TokenUtil.EqualOrPromotable(left.AstSymbolType.ExpressionType, right.AstSymbolType.ExpressionType)) { throw new Exception("Mismatched types"); } return left.AstSymbolType; }
Each expression knows what kind of type it should have (based on the infix token) and can validate that its left and right branches match accordingly. Other parts of the scope builder who use these expressions can now determine their types as well (such as function invokes, method declarations, return statements, etc) since the expression is tagged with a type. Anything that can be used as part of another statement needs to have a type associated to it.
Type inference and validation
Type inference now is super easy. If the left hand side is a type of
var we don’t try to do anything with it, we’ll just give it the same type that the right hand side has. For example, in the variable declaration syntax tree visitor I have a block that looks kind of like this:
// if its type inferred, determine the declaration by the value's type if (ast.DeclarationType.Token.TokenType == TokenType.Infer) { ast.AstSymbolType = ast.VariableValue.AstSymbolType; var symbol = ScopeUtil.DefineUserSymbol(ast.AstSymbolType, ast.VariableName); DefineToScope(ast, symbol); }
Type validation is also easy, all we have to do is check if the right hand side is assignable to the left hand side. To type check any other (non-expression) syntax trees, like
AstSymbolType property that has an enum describing its type that we can use for type checking.
Inferring method return types
In my language I decided I would allow functions to be declared with a
var type inferred return type. This means I had to infer its type from its return value. This isn’t quite as easy as asking the method declaration tree where its return value is yet, since you have to go find it in the tree. What I did to find it was, while iterating over the source tree, keep track of if I’m inside of a method. The scope builder keeps a single heap allocated property called
private MethodDeclr CurrentMethod { get; set; }
Each time I encounter a method declaration (either by an anonymous lambda or a class method or an inline method), I update this property. I also keep track of what the previous method was on the stack. This way as the scope builder iterates through the tree it always knows what is the current method. When a method is done iterating it’ll set
CurrentMethod to the last method it knew about (or null if there was no method it was inside of)
To help with tracking return statements, I’ve also added some extra metadata to the
MethodDeclr AST so every method declaration can now access the syntax tree that represents its return statement directly:
public class MethodDeclr : Ast { /// ... public ReturnAst ReturnAst { get; private set; } /// ... }
During the course of tree iteration, if there is a
return statement we’ll end up hitting the
ReturnAst visit method and we can tag the current method’s return statement with it:
public void Visit(ReturnAst ast) { if (ast.ReturnExpression != null) { ast.ReturnExpression.Visit(this); ast.AstSymbolType = ast.ReturnExpression.AstSymbolType; CurrentMethod.ReturnAst = ast; } }
Here is my
MethodDeclr visit method
public void Visit(MethodDeclr ast) { var previousMethod = CurrentMethod; CurrentMethod = ast; var symbol = ScopeUtil.DefineMethod(ast); Current.Define(symbol); ScopeTree.CreateScope(); ast.Arguments.ForEach(arg => arg.Visit(this)); ast.Body.Visit(this); SetScope(ast); if (symbol.Type.ExpressionType == ExpressionTypes.Inferred) { if (ast.ReturnAst == null) { ast.AstSymbolType = new BuiltInType(ExpressionTypes.Void); } else { ast.AstSymbolType = ast.ReturnAst.AstSymbolType; } } else { ast.AstSymbolType = symbol.Type; } ValidateReturnStatementType(ast, symbol); ScopeTree.PopScope(); CurrentMethod = previousMethod; }
Let’s trace through it:
- Lines 3 and 5. Keep track of the previous method we came from (if any), and set the current method to point to the syntax tree we’re on
- Lines 7 and 9. Create a method symbol and define it in the current scope. This makes the method visible to anything in the same scope
- Line 11. Create a new scope. All internal method arguments and statements are inside of their own scope
- Lines 13 and 15. Visit the arguments and body
- Line 17. Set the method syntax tree’s scope to the current (so now it points to the current scope and can access this later).
- Lines 19 through 29. If the method is a type inferred return type, set the methods type to be the same as the return statement types. If there isn’t a return statement, make it a void.
- Line 32. If its not a type inferred return type, set the type of the method syntax tree to be the same as the symbol we created for it.
- Line 35.
ValidateReturnStatementchecks to make sure that the declared type matches the return statement type. If we declared a function to return string but we are returning a user object, then that’s going to generate a compiler error. >/li>
- Line 37. We’re done with this method scope so pop the scope off the stack
- Line 39. Reset the previous current method to the tracking property
Now we’ve properly validated the method declaration type with its return value, and if we needed to type inferred the method from its return statement. So, as an example, we can support something like this:
[Test] public void TestTypeInferFunctionReturn() { var test = @" var func(){ return 'test'; } "; var ast = (new LanguageParser(new Lexers.Lexer(test)).Parse() as ScopeDeclr); var function = ast.ScopedStatements[0] as MethodDeclr; new InterpretorVisitor().Start(ast); Console.WriteLine("Inferred return type: " + function.AstSymbolType.ExpressionType); Console.WriteLine("Original declared return expression type: " + function.MethodReturnType); }
Which prints out
Inferred return type: String Original declared return expression type: Infer: var
Conclusion
Now the language has static typing and type inference. It doesn’t have type promotion, but now that all the types are defined and propagated through the tree its easy to add. Next I’ll talk about forward references and how I solved that problem. As always, check the github for full source, examples, and tests. | http://onoffswitch.net/adding-static-typing-and-scope-validation-part-2-type-inference-and-validation/ | CC-MAIN-2018-13 | refinedweb | 1,484 | 55.95 |
Private interface used to implement the BLE class. More...
#include <BLEInstanceBase.h>
Private interface used to implement the BLE class.
The BLE class delegates all its abstract operations to an instance of this abstract class, which every vendor port of Mbed BLE shall implement.
The vendor port shall also define an implementation of the freestanding function createBLEInstance(). The BLE API uses this singleton function to gain access to a concrete implementation of this class defined in the vendor port.
Definition at line 54 of file BLEInstanceBase.h.
Base constructor.
Definition at line 60 of file BLEInstanceBase.h.
Virtual destructor of the interface.
Accessor to the vendor implementation of the Gap interface.
Const alternative to getGap().
Accessor to the vendor implementation of the GattClient interface.
Accessor to the vendor implementation of the GattServer interface.
A const alternative to getGattServer().
Accessor to the vendor implementation of the SecurityManager interface.
A const alternative to getSecurityManager().
Fetches a NULL terminated string representation of the underlying BLE vendor subsystem.
Check whether the vendor BLE subsystem has been initialized or not.
Start the initialization of the vendor BLE subsystem.
Calls to this function are initiated by BLE::init, instanceID identify the BLE instance which issue that call while the initCallback is used to signal asynchronously the completion of the initialization process.
This is an optional parameter set to NULL when not supplied.
Process ALL pending events living in the vendor BLE subsystem.
Return once all pending events have been consumed.
Shutdown the vendor BLE subsystem.
This operation includes purging the stack of GATT and GAP state and clearing all state from other BLE components, such as the SecurityManager. Clearing all states may be realized by a call to Gap::reset(), GattClient::reset(), GattServer::reset() and SecurityManager::reset().
BLE::init() must be called afterward to reinstantiate services and GAP state.
Signal to BLE that events needing processing are available.
The vendor port shall call this function whenever there are events ready to be processed in the internal stack or BLE subsystem. As a result of this call, the callback registered by the end user via BLE::onEventsToProcess will be invoked.
Process pending events present in the vendor subsystem; then, put the MCU to sleep until an external source wakes it up. | https://os.mbed.com/docs/mbed-os/v6.14/feature-i2c-doxy/class_b_l_e_instance_base.html | CC-MAIN-2022-33 | refinedweb | 374 | 50.73 |
On 05/09/2012 02:48 AM,. > I fail to see the need to ever report both a loader and a portion, > as well as the need to report multiple portions, for a single sys.path > item. That sounds like an unnecessary complication. As Nick said, you'd return a loader, a list of portions, or neither. it would be an error to return both. I'm mildly sympathetic to not wanting to inspect either the type or attributes of the returned value to figure out which is being returned. A callback to specify the portions seem needlessly complex and a hassle for a C implementation. My compromise is to return a tuple. I don't think a tuple is much of a burden. It's not like writing finders which support namespace portions will be a common activity. | https://mail.python.org/pipermail/import-sig/2012-May/000595.html | CC-MAIN-2016-40 | refinedweb | 139 | 66.64 |
How do I generate a set of random numbers without producing doubles? For example: I want 6 random integers from 1 to 6, but I cannot have doubles (i.e. I can't have 661234 or 222222).
Loop and generate a random number into a temp variable. Have a function which checks whether that temp variable is already in the array you are storing those numbers in. If it is not, append it to the array.
Random generation is so fast that the most practical approach is to simply check each value against an array or list that gets populated every time a new unique number is generated.
pseudo-code:

while (list.size() != (max possible number - min possible number + 1))
{
    result = Rand();
    if (result is unique)
        add result to list of already generated uniques;
    else
        continue;
}

The final list is your random generated numbers, in the order you generated them.
Originally Posted by brewbuck:
Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster.
You could also learn how to properly use rand so that double numbers do not occur so easily.
I would fill your container with the possible values, then shuffle the container. The values will be in a random order and there will be no duplicates. This is much better than getting a random number and checking to see if it has already been chosen.
To shuffle the container, use random_shuffle. If you aren't allowed to use random_shuffle, then the algorithm it uses is fairly basic but easy to get wrong. Post your attempt and/or ask for help if you go this route.
Daved, I have decided to go your route, but I am not sure how to implement it. The only example I could find used vectors and I'm not very familiar with vectors. I was also wondering - is there a way to shuffle characters (char arrayofcharacters[6]) instead of integers (because that's what I really need and it would skip a step)?
Arrays can be used with standard algorithms. Iterator first points at the first element and iterator last points one past the last element: you just have to use some pointer arithmetic.
To call random_shuffle with an array of 6 characters you would use random_shuffle(arrayofcharacters, arrayofcharacters + 6); since arrayofcharacters points to the first element and arrayofcharacters + 6 points to one past the last element, which is what random_shuffle expects as its arguments. Before you do that you have to set each value in the array.
Also, for many implementations of random_shuffle, you should call srand(time(0)) once at the start of your program just like you would if you were using rand().
Ok thanks a bunch y'all.
Ok, I tried random_shuffle, and it shuffled it, but it seems it doesn't shuffle it well enough (possibly srand). This never stops and never says "works" when I input "abcdef" (unless I am mistaken, it should work on average once every 6! random_shuffles):
Code:
    #include<iostream>
    #include<string>
    #include<cstdlib>
    #include<ctime>
    #include<algorithm>
    using namespace std;

    int main()
    {
        srand(time(0));
        cout << "Please enter letters";
        char letters[6];
        cin >> letters;
    shuffle:
        random_shuffle(letters,letters+6);
        if(letters == "fedcba")
            cout << "works";
        else
            goto shuffle;
        return 0;
    }

And this never outputs "mad" when I input "dam" (cin >> trying is only there so that I have time to see the results of random_shuffle):
Code:
    #include<iostream>
    #include<string>
    #include<cstdlib>
    #include<ctime>
    #include<algorithm>
    using namespace std;

    int main()
    {
        srand(time(0));
        char letters[3];
        cin >> letters;
        char trying;
        while(true)
        {
            random_shuffle(letters,letters+3);
            cout << letters;
            cin >> trying;
        }
        return 0;
    }

I figured it might be using the same seed so I tried putting srand(time(0)) in the while loop but that still didn't work. In fact, I keep getting the same results every time (dam, dma, mda, etc. but never mad).
Last edited by ldb88; 06-20-2006 at 08:35 PM.
No. Your first example will never return.
The first problem is letters: It's not big enough. You're trying to fit a string, length 6, in a size 6 array. What about \0 terminator? \n from the cin? "Where thou typest foo, surely someday a user shall typeth supercalif..."
The reason it won't return though is your if() statement: You're not comparing STL strings, you're comparing a character array to a string constant, essentially two pointers, two values that will be the same for the entire runtime of the program, and two values which are not equal.
And it should (if coded right) return after 720 loops, on average.
EDIT:
That'll work... except I get ~434 iterations on average (should it not be 6! ?) after 20 trials; perhaps I did my probability wrong.

Code:
    int main()
    {
        int i = 0;
        srand(time(0));
        cout << "Please enter letters";
        char letters[7] = {0};
        cin >> letters;
        letters[6] = '\0';
    shuffle:
        ++i;
        random_shuffle(letters,letters+6);
        cout << letters << endl;
        if(strcmp(letters, "fedcba") == 0)
            cout << "works";
        else
            goto shuffle;
        cout << endl << i << "iterations" << endl;
        return 0;
    }
Oh, and you'll get your hands slapped for using goto that way...
Last edited by Cactus_Hugger; 06-20-2006 at 09:24.
Thank you. I agree, it should be 6!. I was planning on adding multiple possibilities (other than fedcba) for letters, so I was going to use goto in place of a switch (which I believe can only be used with int). Please correct me if I'm wrong - I have only been programming for the past two months in my spare time, so I am very new to all this.
You might as well use a std::string instead of a char[7], and replace your goto loop with a do while loop.
Code:
    #include <algorithm>
    #include <string>
    #include <cstdlib>
    #include <ctime>
    #include <iostream>

    int main()
    {
        srand(time(0));
        std::cout << "Please enter letters";
        std::string letters;
        std::cin >> letters;
        int i = 0;
        do
        {
            std::random_shuffle(letters.begin(), letters.end());
            std::cout << letters << std::endl;
            ++i;
        } while (letters != "fedcba");
        std::cout << "works\n" << i << "iterations" << std::endl;
        return 0;
    }
Foreign Function Interface
Revision as of 22:26, 26 August 2018
Contents
- 1 Introduction
- 2 Generalities
- 3 Function pointers
- 4 Marshalling data
- 5 Tools
- 6 Linking
- 7 Enhancing performance and advanced topics
- 8 History
- 9 References
- 10 TODO
Introduction
The Foreign Function Interface (FFI) allows Haskell programs to cooperate with programs written with other languages. Haskell programs can call foreign functions and foreign functions can call Haskell code.
Compared to many other languages, Haskell FFI is very easy to use: in the most common case, you only have to translate the prototype of the foreign function into the equivalent Haskell prototype and you're done. For instance, to call the exponential function ("exp") of the libc, you only have to translate its prototype:
double exp(double);
into the following Haskell code
foreign import ccall "exp" c_exp :: Double -> Double
Now you can use the function "c_exp" just like any other Haskell function. When evaluated, it will call "exp".
Similarly, to export the following Haskell function:
triple :: Int -> Int
triple x = 3*x
so that it can be used in foreign codes, you only have to write:
foreign export ccall triple :: Int -> Int
It can get at little more complicated depending on what you want to do, the function parameters, the foreign code you target, etc. This page is here to explain all of this to you.
Generalities
FFI extension
The Foreign Function Interface (FFI) is an extension to the Haskell standard. To use it, you need to enable it with the following compiler pragma at the beginning of your source file:
{-# LANGUAGE ForeignFunctionInterface #-}
Calling conventions
When a program (in any language) is compiled into machine code, functions and procedures become labels: a label is a symbol (a string) associated to a position into the machine code. Calling a function only consists in putting parameters at appropriate places into memory and registers and then branching at the label position. The caller needs to know where to store parameters and the callee needs to know where to retrieve parameters from: there is a calling convention.
To interact with foreign code, you need to know the calling conventions that are used by the other language implementation on the given architecture. It can also depend on the operating system.
GHC supports standard calling conventions with the FFI: it can generate code to convert between its internal (non-standard) convention and the foreign one. If we consider the previous example:
foreign import ccall "exp" c_exp :: Double -> Double
we see that the C calling convention ("ccall") is used. GHC will generate code to put (and to retrieve) parameters into memory and registers conforming to what is expected by a code generated with a C compiler (or any other compiler conforming to this convention).
Other available conventions supported by GHC include "stdcall" (i.e. Pascal convention).
Foreign types
Calling conventions depend on parameter types. For instance, floating-point values (Double, Float) may be passed in floating-point registers. Several values can be combined into a single vector register. And so on. As an example, the System V ABI for x86-64 describes the algorithm used to pass parameters to functions on Linux on a x86-64 architecture depending on the types of the parameters.
Only some Haskell types can be directly used as parameters for foreign functions, because they correspond to basic types of low-level languages such as C and are used to define calling conventions.
According to [1], the type of a foreign function is a foreign type, that is a function type with zero or more arguments where:
- the argument types can be marshallable foreign types, i.e. Char, Int, Double, Float, Bool, Int8, Int16, Int32, Int64, Word8, Word16, Word32, Word64, Ptr a, FunPtr a, StablePtr a or a renaming of any of these using newtype.
- the return type is either a marshallable foreign type or has the form IO t where t is a marshallable foreign type or ().
Warning: GHC does not support passing structures as values yet.
The Foreign.C.Types module contains renaming of some of these marshallable foreign types with names closer to those of C types (e.g. CLong, CShort).
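For instance, to import C's sin function (prototype `double sin(double);`) using these C-style type names (a sketch; `realToFrac` converts between `CDouble` and `Double` at the call sites):

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.C.Types (CDouble)

foreign import ccall "sin" c_sin :: CDouble -> CDouble

sinD :: Double -> Double
sinD = realToFrac . c_sin . realToFrac
```

On most platforms `CDouble` and `Double` have the same representation, but they are distinct Haskell types, so the explicit conversion keeps the code portable.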
If the foreign function performs side-effects, you have to explicitly indicate it in its type (using IO). GHC has no other way to detect it.
foreign import ccall "my_func" myFunc :: Int -> IO Double
Data structures have to be passed by reference (using pointers). We will see how to use them later in this document.
Exported functions
GHC can generate wrappers so that a foreign code can call Haskell code:
triple :: Int -> Int
triple x = 3*x

foreign export ccall triple :: Int -> Int
In the generated binary object, there will be a label "triple" that can be called from a language using the C convention.
Note that to call a Haskell function, the runtime system must have been initialized with a call to "hs_init". It must be released with a call to "hs_exit" when it is no longer required.
See the user guide for more details.
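For example, a minimal C driver for the exported `triple` might look like this (a sketch: it assumes the GHC-generated stub header, here called "Triple_stub.h", and linking against the compiled Haskell object code):

```c
#include <stdio.h>
#include <HsFFI.h>
#include "Triple_stub.h"   /* generated by GHC for the exporting module */

int main(int argc, char *argv[])
{
    hs_init(&argc, &argv);       /* initialize the Haskell runtime */
    printf("%d\n", triple(14));  /* call the exported Haskell function */
    hs_exit();                   /* release the runtime when done */
    return 0;
}
```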
Function pointers
Sometimes you want to manipulate foreign pointers to foreign functions: these are FunPtr in Haskell.
You can get a function pointer by using "&" before a foreign function symbol:
foreign import ccall "&exp" a_exp :: FunPtr (Double -> Double)
Some foreign functions can also return function pointers.
To call a function pointer from Haskell, GHC needs to convert between its own calling convention and the one expected by the foreign code. To create a function doing this conversion, you must use "dynamic" wrappers:
foreign import ccall "dynamic" mkFun :: FunPtr (Double -> Double) -> (Double -> Double)
Then you can apply this wrapper to a FunPtr to get a Haskell function:
c_exp :: Double -> Double
c_exp = mkFun a_exp
You can also perform the opposite operation to give the foreign code a pointer to a Haskell function. You need a "wrapper" import. GHC generates callable code that executes the wrapped Haskell closure with the appropriate calling convention and returns a pointer (FunPtr) to it. You have to release the generated code explicitly with `freeHaskellFunPtr` to avoid memory leaks: GHC has no way to know whether the function pointer is still referenced in some foreign code, hence it doesn't collect it.
add :: Int -> Int -> Int
add x y = x + y

foreign import ccall "wrapper"
  createAddPtr :: (Int -> Int -> Int) -> IO (FunPtr (Int -> Int -> Int))

main = do
  addPtr <- createAddPtr add
  -- you can use addPtr like any other FunPtr (e.g. give it to foreign code)
  -- ...
  -- you MUST free the FunPtr, otherwise it won't be collected
  freeHaskellFunPtr addPtr
Marshalling data
In Haskell we are accustomed to let the runtime system -- especially the garbage collector -- manage memory. When we use the FFI, however, we sometimes need to do some manual memory management to comply with the data representations of the foreign codes. Hopefully, Haskell makes it very easy to manipulate low-level objects such as pointers. Moreover, many useful Haskell tools have been designed to simplify conversions between data representations.
Pointers
A pointer is an offset in memory. In Haskell, it is represented with the Ptr a data type. Where "a" is a phantom type that can be used to differentiate two pointers. You can think of "Ptr Stuff" as being equivalent to a "Stuff *" type in C (i.e. a pointer to a "Stuff" data). This analogy may not hold if "a" is a Haskell type not representable in the foreign language. For instance, you can have a pointer with the type "Ptr (Stuff -> OtherStuff)" but it is not function pointer in the foreign language: it is just a pointer tagged with the "Stuff -> OtherStuff" type.
You can easily cast between pointer types using `castPtr` or perform pointer arithmetic using `plusPtr`, `minusPtr` and `alignPtr`. NULL pointer is represented with `nullPtr`.
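For example (a sketch): given a pointer to a buffer that starts with a 4-byte header followed by an Int32 payload, `plusPtr` and `castPtr` combine to produce a correctly-typed payload pointer:

```haskell
import Data.Int (Int32)
import Data.Word (Word8)
import Foreign.Ptr (Ptr, castPtr, plusPtr)

-- Skip a 4-byte header and retype the result as a pointer to the payload.
payloadPtr :: Ptr Word8 -> Ptr Int32
payloadPtr p = castPtr (p `plusPtr` 4)
```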
Memory allocation
There are basically two ways to allocate memory:
- on the Haskell heap, using `alloca*` functions in Foreign.Marshal.Alloc
The allocation is ephemeral: it lasts the time of the execution of an IO action, as in the following example:
do allocaBytes 128 $ \ptr -> do
       -- do stuff with the pointer ptr...
       -- ...
       -- do not return "ptr" in any way because it will become an invalid pointer
   -- here the 128 bytes have been released and should not be accessed
- on the "low-level" heap (the same as the runtime system uses), using `malloc*` functions in Foreign.Marshal.Alloc
Allocations on the low-level heap are not managed by the Haskell implementation and must be freed explicitly with `free`.
do ptr <- mallocBytes 128
   -- do stuff with the pointer ptr...
   -- ...
   free ptr
   -- here the 128 bytes have been released and should not be accessed
Foreign Pointers
An hybrid approach is to use ForeignPtr. Foreign pointers are similar to Ptr except that they have finalizers (i.e. actions) attached to them. When the garbage collector detects that a ForeignPtr is no longer accessible, it executes its associated finalizers. A basic finalizer is `finalizerFree` [2] that calls `free` on the pointer.
You can convert a Ptr into a ForeignPtr using `newForeignPtr`, add additional finalizers, etc. [3].
In the following example, we use `mallocForeignPtrBytes`. It is equivalent to call `malloc` and then to associate the `finalizerFree` finalizer with `newForeignPtr`. GHC has optimized implementations for `mallocForeignPtr*` functions, hence they should be preferred.
do ptr <- mallocForeignPtrBytes 128
   -- do stuff with the pointer ptr...
   -- ...
   -- ptr is freed when it is collected
Using pointers: Storable instances
You often want to read or to write at the address a of pointer. Reading consists in obtaining a Haskell value from a pointer; writing consists in somehow writing a representation of the Haskell value at the pointed address. Writing and reading a value depends on the type of the value, hence these methods are encapsulated into the Storable type class.
For any type T such that it exists a Storable T instance:
- you can read a value, using
peek :: Ptr T -> IO T
- you can write a value, using
poke :: Ptr T -> T -> IO ()
`Storable a` also defines a `sizeOf :: a -> Int` method that returns the size of the stored value in bytes.
All the marshallable foreign types (i.e. basic types) have Storable instances. Hence we can use these to write new Storable instances for more involved data types. In the following example, we create a Storable instance for a Complex data type:
data Complex = Complex Double Double

instance Storable Complex where
   sizeOf _ = 2 * sizeOf (undefined :: Double) -- stored complex size = 2 * size of a stored Double
   peek ptr = do
      real <- peekByteOff ptr 0
      img  <- peekByteOff ptr (sizeOf real) -- we skip the bytes containing the real part
      return $ Complex real img
   poke ptr (Complex real img) = do
      pokeByteOff ptr 0 real
      pokeByteOff ptr (sizeOf real) img
   ...
This is not very complicated but it can become very cumbersome if our data type has many fields. Several tools have been developed to automatically or semi-automatically create the Storable instances.
Renaming and Storable instances
It is very common to use type renaming (i.e. newtype) to wrap a data type as in the following example, where we declare a type Pair that contains a pair of Double values just like our Complex type.
newtype Pair = Pair Complex
If we want to store Pair values exactly like Complex values, we have several possibilities:
- unwrap the Complex value each time we want to use its Storable instance
- create a new Storable Pair instance that does the same thing
- automatically derive the Storable Pair instance
The last solution is obviously the simplest one. It requires an extension however:
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

newtype Pair = Pair Complex deriving (Storable)
Arrays
It is very common to read and to write arrays of values. Foreign.Marshal.Array provides many functions to deal with pointers to arrays. You can easily write an Haskell list of storable values as an array of values, and vice versa.
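For example, passing a Haskell list to a C function that expects a pointer and a length (a sketch; `sum_doubles` is a hypothetical C function with prototype `double sum_doubles(double *xs, int n);`):

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.C.Types (CDouble, CInt)
import Foreign.Marshal.Array (withArrayLen)
import Foreign.Ptr (Ptr)

foreign import ccall "sum_doubles"
  c_sumDoubles :: Ptr CDouble -> CInt -> IO CDouble

-- withArrayLen marshals the list into a temporary C array, passes its
-- length and address to the callback, and frees the array afterwards.
sumList :: [CDouble] -> IO CDouble
sumList xs = withArrayLen xs $ \n ptr ->
  c_sumDoubles ptr (fromIntegral n)
```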
Strings
Strings in Haskell are lists of Char, where Char represents a unicode character. Many foreign codes use the C representation for strings (CString in Haskell): an array of bytes where each byte is a extended ASCII character terminated with a NUL character.
In Foreign.C.String, you have many functions to convert between both representations. Be careful because Unicode characters outside of the ASCII range may not be representable with the C representation.
foreign import ccall "echo" c_echo :: CString -> IO ()

echo :: String -> IO ()
echo str = withCString str $ \c_str -> c_echo c_str
Data structures
Marshalling data structures of foreign languages is the most cumbersome task: you have to find out the offset of each field of the data structure (considering padding bytes, etc.). Hopefully, there are Haskell tools to help with this task.
Suppose you have a C data structure like this:
struct MyStruct {
    double  d;
    char    c;
    int32_t i;
};
And its Haskell counterpart:
data MyStruct = MyStruct
   { d :: Double
   , c :: Word8
   , i :: Int32
   }
The following sub-sections present the different ways to write the Storable instance for MyStruct.
The following header is implied:
import Control.Applicative ((<$>), (<*>))
import Foreign.Storable
The explicit way
instance Storable MyStruct where
   alignment _ = 8
   sizeOf _    = 16
   peek ptr = MyStruct
      <$> peekByteOff ptr 0
      <*> peekByteOff ptr 8
      <*> peekByteOff ptr 12 -- skip padding bytes after "c"
   poke ptr (MyStruct d c i) = do
      pokeByteOff ptr 0 d
      pokeByteOff ptr 8 c
      pokeByteOff ptr 12 i
- The structure alignment is the least common multiple of the alignments of the structure fields. The alignment of primitive types is equal to their size in bytes (e.g. 8 for Double, 1 for Word8 and 4 for Int32). Hence the alignment for MyStruct is 8.
- We indicate the offset of each field explicitly for peek and poke methods. We introduce padding bytes to align the "i" field (Word32) on 4 bytes. A C compiler does the same thing (except for packed structures).
- The size of the structure is the total number of bytes, including padding bytes between fields.
hsc2hs
hsc2hs is a tool that can help you compute field offsets by using C headers directly.
Save your Haskell file with a .hsc extension to enable the support of hsc2hs.
#include <myheader.h>

instance Storable MyStruct where
   peek ptr = MyStruct
      <$> (#peek MyStruct, d) ptr
      <*> (#peek MyStruct, c) ptr
      <*> (#peek MyStruct, i) ptr
   ...
c2hs
c2hs is another tool that can help you doing the same thing as hsc2hs for data structure marshalling. They have differences in other aspects though.
#include <myheader.h>

instance Storable MyStruct where
   peek ptr = MyStruct
      <$> {#get MyStruct->d} ptr
      <*> {#get MyStruct->c} ptr
      <*> {#get MyStruct->i} ptr
   ...
c-storable-deriving
You can also derive the Storable instances from the types of the fields and their order in the Haskell data type by using c-storable-deriving package.
{-# LANGUAGE DeriveGeneric #-}

import GHC.Generics (Generic)
import Foreign.CStorable

data MyStruct = MyStruct {...} deriving (Generic)

instance CStorable MyStruct

instance Storable MyStruct where
   sizeOf    = cSizeOf
   alignment = cAlignment
   poke      = cPoke
   peek      = cPeek
The CStorable type-class is equivalent to the Storable type-class but has additional default implementations for its methods if the type has an instance of Generic.
Pointers to Haskell data
In some cases, you may want to give to the foreign code an opaque reference to a Haskell value that you will retrieve later on. You need to be sure that the value is not collected between the time you give it and the time you retrieve it. Stable pointers have been created exactly to do this. You can wrap a value into a StablePtr and give it to the foreign code (StablePtr is one of the marshallable foreign types).
You need to manually free stable pointers using `freeStablePtr` when they are not required anymore.
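A typical round trip looks like this (a sketch; `register_callback_data` stands for a hypothetical foreign function that stores the opaque reference on the C side):

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.StablePtr

foreign import ccall "register_callback_data"
  c_register :: StablePtr [Int] -> IO ()

keepAlive :: [Int] -> IO (StablePtr [Int])
keepAlive xs = do
  sp <- newStablePtr xs   -- xs is now kept alive by the runtime system
  c_register sp           -- hand the opaque reference to the foreign code
  return sp

-- Later, when the foreign code is done with it:
release :: StablePtr [Int] -> IO ()
release sp = do
  xs <- deRefStablePtr sp -- retrieve the original Haskell value
  print (sum xs)
  freeStablePtr sp        -- allow it to be garbage-collected again
```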
Tools
There are several tools to help writing bindings using the FFI. In particular by using C headers.
Using C headers
Haskell
Dynamic function call
Linking
There are several ways for GHC to find the foreign code to link with:
- Static linking: the foreign code binary object is merged with the Haskell one to form the final executable
- Dynamic linking: the generated binary uses libraries (e.g. .dll or .so) that are automatically linked when the binary is executed
- Explicit dynamic linking: the Haskell code explicitly loads libraries, finds symbols in them and makes calls to them
The first two modes are well described in GHC and Cabal manuals. For the last one, you need to use platform dependent methods:
- on UNIX, you can use System.Posix.DynamicLinker
Explicit dynamic linking helps you obtaining function pointers (FunPtr). You need to write "dynamic" wrappers to call the functions from Haskell.
Dynamic linker template
dynamic-linker-template is a package that uses template Haskell to automatically generate "dynamic" wrappers for explicit dynamic linking (only supporting Unix for now).
The idea is that a library is like a record containing functions, hence it is easy to generate the code that load symbols from a library and store them into a Haskell record.
In the following code, the record matching library symbols is the data type MyLib. The generated code will apply "myModifier" to each field name of the record to find corresponding symbols in the library. myModifier should often be "id" but it is sometimes useful when symbols are not pretty. Here in the foreign code "_v2" is appended at the end of each symbol to avoid symbol clashes with the first version of the library.
The package supports optional symbols: functions that may or may not be present in the library. These optional functions are represented by encapsulating the function type into Maybe.
The `libHandle` field is mandatory and contains a pointer to the loaded library. You can use it to unload the library.
A function called `loadMyLib` is generated to load symbols from a library, wrap them using "dynamic" wrappers and store them into a MyLib value that is returned.
{-# LANGUAGE TemplateHaskell, ForeignFunctionInterface #-}

import System.Posix.DynamicLinker.Template

data MyLib = MyLib
   { libHandle :: DL
   , thing1 :: Double -> IO Int          -- Mandatory symbol
   , thing2 :: Maybe (Int -> Int -> Int) -- Optional symbol
   }

myModifier :: String -> String
myModifier = (++ "_v2")

$(makeDynamicLinker ''MyLib CCall 'myModifier)

-- Load your library with:
-- loadMyLib :: FilePath -> [RTLDFlags] -> IO MyLib
Enhancing performance and advanced topics
To enhance performance of a call to a foreign function, you first need to understand how GHC runtime system works. GHC uses user-space threads. It uses a set of system threads (called "Tasks"). Each system thread can execute a "capability" (i.e. a user-space thread manager). User-space threads are distributed on capabilities. Each capability executes its associated user-space threads, one at a time, using cooperative scheduling or preemption if necessary.
All the capabilities have to synchronize to perform garbage collection.
When a FFI call is made:
- the user-space thread is suspended (indicating it is waiting for the result of a foreign call)
- the current system thread executing the capability executing the user-space thread releases the capability
- the capability can be picked up by another system thread
- the user-space threads that are not suspended in the capability can be executed
- garbage collection can occur
- the system thread executes the FFI call
- when the FFI call returns, the user-space thread is woken up
If there are too many blocked system threads, the runtime system can spawn new ones.
Unsafe calls
All the capability management before and after a FFI call adds some overhead. It is possible to avoid it in some cases by adding the "unsafe" keyword as in the following example:
foreign import ccall unsafe "exp" c_exp :: Double -> Double
By doing this, the foreign code will be directly called but the capability won't be released by the system thread during the call. Here are the drawbacks of this approach:
- if the foreign function blocks indefinitely, the other user-space threads of the capability won't be executed anymore (deadlock)
- if the foreign code calls back into the Haskell code, a deadlock may occur
- it may wait for a value produced by one of the locked user-space threads on the capability
- there may not be enough capabilities to execute the code
Foreign PrimOps
If unsafe foreign calls are not fast enough for you, you can try the GHCForeignImportPrim extension.
{-# LANGUAGE GHCForeignImportPrim, MagicHash, UnboxedTuples, UnliftedFFITypes #-}

import GHC.Base
import GHC.Int

-- Primitive with type :: Int -> Int -> IO Int
foreign import prim "my_primop_cmm"
   my_primop# :: Int# -> Int# -> State# RealWorld -> (# State# RealWorld, Int# #)

my_primop :: Int64 -> Int64 -> IO Int64
my_primop (I64# x) (I64# y) = IO $ \s ->
   case (my_primop# x y s) of (# s1, r #) -> (# s1, I64# r #)
Then you have to write your foreign function "my_primop_cmm" using C-- language used internally by GHC.
As an alternative, if you know how C-- is compiled on your architecture, you can write code in other languages. For instance directly in assembly or using C and LLVM.
Here is a comparison of the different approaches on a specific case.
Bound threads
Some foreign codes use (system) thread-local storage. Some others are not thread-safe. In both case, you have to be sure that the same system thread executes the FFI calls. To control how user-space threads are scheduled on system threads, GHC provide bound threads. Bound threads are user-space threads (Haskell threads) that can only be executed by a single system thread.
Note that bound threads are more expensive to schedule than normal threads. The first thread executing "main" is a bound thread.
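For example, confining all calls to a thread-local-storage-based C library to a single OS thread (a sketch; `lib_init` and `lib_work` are hypothetical foreign functions):

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

import Control.Concurrent (forkOS)
import Control.Concurrent.MVar

foreign import ccall "lib_init" c_init :: IO ()
foreign import ccall "lib_work" c_work :: IO ()

main :: IO ()
main = do
  done <- newEmptyMVar
  -- forkOS creates a bound thread: every FFI call made inside it runs
  -- on the same system thread for the thread's whole lifetime.
  _ <- forkOS $ do
    c_init
    c_work
    putMVar done ()
  takeMVar done
```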
Inline FFI calls
If you want to make a one-shot FFI call without the hassle of writing the foreign import, you can use the following technique (using Template Haskell).
In AddTopDecls.hs:
{-# LANGUAGE TemplateHaskell #-}
module AddTopDecls where

import Language.Haskell.TH
import Language.Haskell.TH.Syntax

importDoubleToDouble :: String -> ExpQ
importDoubleToDouble fname = do
   n <- newName fname
   d <- forImpD CCall unsafe fname n [t|Double -> Double|]
   addTopDecls [d]
   [|$(varE n)|]
In your module:
{-# LANGUAGE TemplateHaskell #-}
module Main where

import Language.Haskell.TH
import Language.Haskell.TH.Syntax

import AddTopDecls

main :: IO ()
main = do
   print ($(importDoubleToDouble "sin") pi)
   print ($(importDoubleToDouble "cos") pi)
History
Header inclusion
In old versions of GHC (6.8.3 and earlier), the compiler was able to check the prototypes of the foreign imports by including C header files into the generated C code. For instance, you could write:
{-# INCLUDE <math.h> #-}
or
foreign import ccall "math.h sin" c_sin :: Double -> Double
to include the "math.h" header.
This is deprecated in GHC. Nevertheless you may still find examples using this syntax so it is good to know that it has been used. Moreover, other compilers may still use this feature.
- Justification of the deprecation from the GHC 6.10.1 manual:
"C functions are normally declared using prototypes in a C header file. Earlier versions of GHC (6.8.3 and earlier) #included the header file in the C source file generated from the Haskell code, and the C compiler could therefore check that the C function being called via the FFI was being called at the right type.
GHC no longer includes external header files when compiling via C, so this checking is not performed. The change was made for compatibility with the native code backend (-fasm) and to comply strictly with the FFI specification, which requires that FFI calls are not subject to macro expansion and other CPP conversions that may be applied when using C header files. This approach also simplifies the inlining of foreign calls across module and package boundaries: there's no need for the header file to be available when compiling an inlined version of a foreign call, so the compiler is free to inline foreign calls in any context.
The -#include option is now deprecated, and the include-files field in a Cabal package specification is ignored."
References
- FFI addendum
- The Foreign Function Interface section of the Haskell 2010 report
- FFI chapter in the GHC user guide
- "Tackling the awkward squad" paper
- "Extending the Haskell FFI with Concurrency" paper (the number of capabilities is now greater than 1)
Related links
Old
Blog articles
- Dealing with fragile C libraries (e.g. MySQL) from Haskell
- Simple demonstration of Haskell FFI
- C and Haskell sitting in a tree…
- C2HS example: To save other people frustration
- Cxx foreign function interface; how to link to a C++ library
- Safety first: FFI and threading
TODO
- Fix References section
- Foreign language specific issues
- C++ symbol mangling
- Embedded Objective C
- Precision.
- Linking
- pkgconfig
- cabal
- explicit (ghc parameters)
- cf | https://wiki.haskell.org/index.php?title=Foreign_Function_Interface&diff=prev&oldid=62597&printable=yes | CC-MAIN-2020-45 | refinedweb | 4,018 | 50.67 |
dpm_abortreq - abort a given get, put or copy request
#include <sys/types.h>
#include "dpm_api.h"

int dpm_abortreq (char *r_token)
dpm_abortreq aborts a given get, put or copy request. The request status and the status of every file belonging to this request and still in the queue is set to DPM_ABORTED. r_token specifies the token returned by a previous get, put or copy request.
This routine returns 0 if the operation was successful or -1 if the operation failed. In the latter case, serrno is set appropriately.
EACCES       Permission denied.
EFAULT       r_token is a NULL pointer.
EINVAL       The length of r_token exceeds CA_MAXDPMTOKENLEN or the token is unknown.
SENOSHOST    Host unknown.
SEINTERNAL   Database error.
SECOMERR     Communication error.
Hello B2B Gurus,
I am able to process B2B inbound files successfully from Trading Partner --> B2B --> BPEL. When it comes to BPEL, I am not able to parse/transform the received XML: I am getting selection failures in assign and empty nodes in transformation. When I look at the input XML payload received in the ReceiveB2BConsume payload, I observe that the namespace is xmlns="NS_495C37A0921C418BB66A86A6E75B2CA120070312140549" instead of the actual namespace xmlns="urn:oracle:b2b:X12/V4010/856", which is in my XSD as well, and I am getting the XML start tag <?xml version="1.0" encoding="UTF-8" ?> 2 times:
<?xml version="1.0" encoding="UTF-8" ?>
<?xml version="1.0" encoding="UTF-8" ?>
<Transaction-856
<Internal-Properties>
........
...
</Transaction-856>
I went back and checked the XSD which i loaded in the B2B Console and i am having the following namespace
"<xsd:schema"
I am not sure why the XML translated from EDI in B2B console has the different namespace and XML start tag 2 times. Can you please help me resolve the issue. Let me know if i am missing anything.
Thanks in Advance..
Another solution is to change the namespace in the ecs file. This can be done in the B2B document editor when you generate the XSD file. This how we solved this problem.
Regards
Erwin | https://community.oracle.com/message/11243357 | CC-MAIN-2014-15 | refinedweb | 223 | 59.74 |
I'm trying to write a program that asks the user their hours and returns it to the main program, and then their rate of pay and returns it to the main program. I can't run the program because it is saying there is a problem with this part of my code towards the end, it's not liking the quotation mark for some reason. of" The program may have other problems but I don't know because I can only get this far.
It is supposed to have 3 functions in addition the main. when it is done it should display something to this effect.
Pay rate $10.00
Regular Hours 40
Overtime hours 20
Regular pay $400.00
Overtime pay $300.00
Total Pay $700.00
def ask_hours(): return input ("How many hours did you work? ") rate = raw_input ("What is your rate of pay? ") def findrate(): pay = hours * rate overtime = (hours - 40) * (rate * 1.5) totalpay = pay + overtime print "You earned", pay, "regular pay and", overtime "overtime pay which is a total of", totalpay def main() : hours = ask_hours() findrate() main() | https://www.daniweb.com/programming/software-development/threads/115960/3-functions-in-addition-to-main-quotation-prob-maybe-more | CC-MAIN-2018-47 | refinedweb | 183 | 83.25 |
sorry guys. i just got back from my class.
ok the point of this program is to type "java Lotto (mm or sl, which is the type of lotto) (#of tickets)" in the command line
and the output is...
Type: Posts; User: tlckl3m3elmo
sorry guys. i just got back from my class.
ok the point of this program is to type "java Lotto (mm or sl, which is the type of lotto) (#of tickets)" in the command line
and the output is...
Oh like i didn't type anything?
Well I tried typing "java Lotto sl" it just gives me a blank. sl should be my first element for my array args[]
why do you keep on asking me for the full text of the error message? i already did! thats all it says to me. and what do you mean where do i check if there are any elements in the args array?
uh you asked where i test my methods. I compiled it with no error messages. but when i try to run the program it gives me the arrayout of bound exception:0 msg
i did post it
/students/mau1> java Lotto
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 0
at Lotto.main(Lotto.java:67)
this is my error when i type "java Lotto" in my...
oh sorry norm. and itts not comparing 2 strings. i have the boolean method, but that doesnt matter for my loop.
and i test my code in Hills server at my college using SSH.
i haven't solved it. I think there's something wrong with my for loop to generate the number of lotto tickets.
it worked up until i tried putting in the loop.
import java.io.*;
import java.util.*;
public class Lotto
{
//*********commandLine1()**********
public static boolean commandLine1(String str)
{
...
i can't get it to loop the number of tickets that the user puts in. and right now i keep on getting an ArrayoutofBoundException:0 in my main. i can't see what's wrong with it
import java.io.*;...
Why does it give me a 0 when my range is specifically 1~56?
Code:
//**********random()***************
public static int random(int a, int b)
{
return((int)((b -... | http://www.javaprogrammingforums.com/search.php?s=11b59083dd8eeed8501495b3fb845d62&searchid=1272828 | CC-MAIN-2014-52 | refinedweb | 370 | 84.68 |
Use the Calendar class for performing Date related operations, esp the add() and set() methods. A sample snippet in Groovy:
import java.util.* def c = Calendar.getInstance() c.set(Calendar.DAY_OF_MONTH, 2) c.set(Calendar.MONTH, 2) // c now contains the date 2nd of March 2011 def i = 1 while(i <= 7) { c.add(Calendar.DAY_OF_MONTH, -1) println c.getTime() i++ }
OUTPUT: Tue Mar 01 11:31:45 IST 2011 Mon Feb 28 11:31:45 IST 2011 Sun Feb 27 11:31:45 IST 2011 Sat Feb 26 11:31:45 IST 2011 Fri Feb 25 11:31:45 IST 2011 Thu Feb 24 11:31:45 IST 2011 Wed Feb 23 11:31:45 IST 2011
Use the SimpleDateFormat for converting the Date object to a desired representation.
Thanks ~s.o.s~
this works perfect :)
also I came up with another solution, (I was lazy a couple hours ago).
I'm leaving it here.
public static String[] SevenDaysFromNow(){ String[] days = new String[7]; Date today = new Date(); long times = today.getTime(); long oneday = (24 * 60 * 60 * 1000); for(int i=0; i<7; i++){ times -= oneday; Date tmp = new Date(times); String DATE_FORMAT_NOW = "yyyy-MM-dd"; SimpleDateFormat sdf = new SimpleDateFormat(DATE_FORMAT_NOW); String MyDate = sdf.format(tmp); days[i] = MyDate; } return days; }
this will output dates in yyyy-mm-dd (MySQL friendly)
but your solution is awesome.
I will mark this as solved.
thanks for your help | https://www.daniweb.com/programming/software-development/threads/342878/java-dates | CC-MAIN-2017-04 | refinedweb | 238 | 65.83 |
- Issued:
- 2019-09-24
- Updated:
- 2019-09-24
RHBA-2019:2816 - Bug Fix Advisory
Synopsis
OpenShift Container Platform 3.11 bug fix update
Type/Severity
Bug Fix Advisory
Topic
Red Hat OpenShift Container Platform release 3.11.146 3.11.146. See the following advisory for the container images for
this release:
This update fixes the following bugs:
- kuryr-controller could not access the OpenShift API LoadBalancer members with OVN if kuryr-controller was running on master nodes. Now, kuryr-controller is forced to be on infrastructure nodes. As a result, kuryr-controller can now access the OpenShift API LoadBalancers. (BZ#1641647)
- In rare cases, the cluster console would not display a projects list when the user logged in. This was due to a race condition that would cause the project list to fail after logging into the admin console. The user would need to refresh the page to see the list of projects. This race condition has been addressed, and projects now load successfully after logging in. (BZ#1703777)
- Image tags were not provided for some ose-pod image pulls. As a result, multiple image versions could be pulled from the ose-pod image. Now, image tags have been added to the registry_auth and only a single image version for ose-pod is pulled. (BZ#1725938)
- Clusters with large numbers of unidled services could see extended wait times applying endpoint changes to cluster IP addresses. Iptables access is now better coordinated and synchronization of firewall rules occurs in less time. (BZ#1734009)
- Egress IP addresses did not operate correctly in namespaces with restrictive NetworkPolicies. Pods that accepted traffic only from specific sources would not be able to send egress traffic via egress IP addresses because the response from the external server would be mistakenly rejected by their NetworkPolicies. Now, replies from egress traffic are correctly recognized as replies rather than as new connections. (BZ#1741477)
- Metrics-server-certs did not remove secrets if the server was uninstalled. The metrics serving cert label has been corrected and metrics serving certs are removed completely. (BZ#1746212)
- Outgoing connections would sometimes be dropped if a minimum kernel version was not installed. A check has been added to ensure that the installed kernel meets the required minimum version to avoid network issues. This check is run during prerequisites, scale-up, and upgrade. (BZ#1749024)
- Upgrade playbooks were not respecting the openshift_docker_additional_registries variable. The registries.conf has been updated to observe inventory variables that have been set or changes since the last upgrade. (BZ#1749341).146, - 1432875 - Unable to deploy postgresql
- BZ - 1641647 - kuryr-controller cannot access OpenShift API LB members with OVN if running on master nodes
- BZ - 1661447 - No way to respect existing PVC when upgrading Metrics with standard playbook
- BZ - 1703777 - sometimes cluster console is not showing projects list when user logged in
- BZ - 1720172 - Openshift-on-OpenStack installation playbook fails when namespace isolation is enabled
- BZ - 1733327 - redeploy-certificates crashes service catalog controllers and fails if metrics-server is installed
- BZ - 1733429 - [3.11 backport]cannot access to the service's externalIP with egressIP in openshift-ovs-multitenant environment
- BZ - 1734009 - [3.11 backport] Synchronization firewall rules takes up to 90min
- BZ - 1735502 - [3.11] Some pods lost default gateway route after restarting docker
- BZ - 1741477 - [3.11] EgressIP doesn't work with NetworkPolicy unless traffic from default project is allowed
- BZ - 1743950 - full chain in custom CA causes controllers to crashloopback
- BZ - 1746212 - [3.11] metrics-server-certs secret does not be removed after metrics-server uninstall due to incorrect label
- BZ - 1748982 - [3.11]redeploy-certificates playbook redeploys service signer certificate - causing internal apps not working
- BZ - 1749024 - [3.11] Ensure minimum kernel version during upgrade
- BZ - 1749341 - During upgrade playbook is not respecting `openshift_docker_additional_registries` varisble
- BZ - 1752853 - There should be more index pattern in kibana
References
(none)
Red Hat OpenShift Container Platform 3.11
Red Hat OpenShift Container Platform for Power 3.11
The Red Hat security contact is secalert@redhat.com. More contact details at. | https://access.redhat.com/errata/RHBA-2019:2816 | CC-MAIN-2021-10 | refinedweb | 669 | 53.71 |
Alrighty, so I'm working on a piece of code for my CS class. I'm fairly new to this, and I'm making some mistake in trying to get this assignment together. Here is the code I'm using...
#include <iostream>
#include <cstring> // I am using the C++ strings library, not character arrays.
using namespace std;
void getName(string);
void giveName(string);
string namesArray[10];
int main(){
int getOption = 0;
int loopValue = 0;
int loopValue2 = 0;
do{
cout << "Please enter a 1 if you want to enter names, a 2 if you wish to find a name, or a 0 if you want to exit." << endl;
do{
cin >> getOption;
if (getOption == 0){
cout << "Goodbye." << endl;
return 0;
}
if (getOption < 0 || getOption > 2){
cout << "Please enter a valid selection." << endl;
loopValue2 = 0;
}
else{
if (getOption == 1){
getName(namesArray[10]);
loopValue2 = 1;
}
else{
giveName(namesArray[10]);
loopValue2 = 1;
}
}
}while(loopValue2 = 0);
} while(loopValue = 0);
}
void getName(string namesArray[]){
string firstName, lastName;
for(int i = 0; i < 10; i++){
cout << "When you are done entering names, please type exit for the next person's first name." << endl;
cout << "Please enter person number " << i++ << "'s first name." << endl;
cin >> firstName;
if (firstName == "exit"){
i = 10;
break;
}
cout << "Please enter person number " << i++ << "'s last name." << endl;
cin >> lastName;
lastName = " " + lastName;
namesArray[i] = firstName + lastName; // Concat to give the full name in 1 part of the array.
}
}
Albeit there are probably thousands of better ways to write this, I'm still new and learning. Now, my problem is in line 35(at the moment) in bold. I'm getting an error I don't know how to make sense of. It says..
"Undefined first referenced
symbol in file
giveName(std::basic_string<char, std::char_traits<char>, std::allocator<char> >)/var/tmp//ccyPQIEC.o
ld: fatal: Symbol referencing errors. No output written to a.out
collect2: ld returned 1 exit status"
I've tried passing the string array using nothing after namesArray, and it tells me that I can't do that by saying..
"error: conversion from `std::string*' to non-scalar type `std::string' requested"
Also, if I use namesArray[], I says...
"error: expected primary-expression before ']' token".
I am at a loss. Can anyone tell me where I am making my mistake(s)?
Update: | http://forums.devx.com/showthread.php?167623-Could-use-a-little-help-Strings-arrays-and-functions&p=503016 | CC-MAIN-2016-18 | refinedweb | 380 | 64.2 |
IRC, freenode, #hurd, 2013-02-26
<youpi> btw, about fakeroot-hurd
<youpi> the remaining issue I see is with argv[0] (yes, again...)
IRC, freenode, #hurd, 2013-04-03
<youpi> btw, I believe our fakeroot-hurd is close to working actually
<youpi> it's just a argv[0] issue supposed to be fixed by exec_file_name but apparently not fixed in that case, for some reason
IRC, freenode, #hurd, 2013-08-26
< teythoon> also I looked into the fakeroot issue, aiui the problem is that scripts are not handled correctly, right?
< teythoon> the exec server fails to locate the scripts file name, and so it hands the file_t to the interpreter process and passes /dev/fds/3 as script name
< teythoon> afaics that breaks e.g. python
< youpi> yes
< youpi> pinotree's exec_file_name is supposed to fix that, but for some reason it doesn't work here
< pinotree> it was pochu's, not mine
< youpi> ah, right
< teythoon> ah I see, I was wondering about that
< pochu> it was working for a long time, wasn't it?
< pochu> and only stopped working recently
< youpi> did it completely stop?
< youpi> I have indeed seen odd issues
< youpi> I haven't actually checked whether it has completely stopped working
< youpi> probably worth looking there first
< pinotree> gtk+3.0 fails, but other stuff like glib2.0 and gtester-using stuff works
< teythoon> huh? I created tests like "#!/bin/sh\necho $0" and that says /dev/fd..., and a python script doing the same doesn't even run, so how can it work for a package build?
< youpi> it works for me in plain bash
< youpi> #!/bin/sh
< youpi> echo $0
< youpi> € $PWD/test.sh
< youpi> /home/samy/test.sh
< teythoon> it does !?
< youpi> yes
< youpi> not in fakeroot-hurd however, as we said
< teythoon> well, obviously it works when not being run under fakeroot-hurd, yes
< youpi> ok, so we weren't talking about the same thing
< youpi> a mere shell script doesn't work in fakeroot-hurd indeed
< youpi> that's why we still use fakeroot-sysv
< teythoon> right
< youpi> err, -tcp
IRC, freenode, #hurd, 2013-11-18
<teythoon> I believe I figured out the argv[0] issue with fakeroot-hurd
<teythoon> but I'm not sure how to fix this
<teythoon> first of all, Emilios file_exec_file_name patch set works fine
<teythoon> but not with fakeroot
<teythoon>
<teythoon> check_hashexec tries to locate the script file using a heuristic
<teythoon> Emilios patch improves the situation with just providing this information
<teythoon> but then, the identity port of the "discovered" file is compared with the id port of the script file
<teythoon> to verify if the heuristic found the right file
<teythoon> but when using fakeroot-hurd, /hurd/fakeroot proxies all requests
<teythoon> but the exec server is outside of the /hurd/fakeroot environment, so it gets the id port from the real filesystem
<teythoon> we could skip that test if the script name is explicitly provided though
<teythoon> that test was meant to see whether a search through $PATH turned up the right file
<braunr> teythoon: nice
<teythoon> braunr: thanks :)
<teythoon> unfortunately, dpkg-buildpackaging hurd with it still fails for some reason
<teythoon> but it is faster than fakeroot-tcp :)
<braunr> even chown ?
<braunr> or chmod ?
<teythoon> dunno in detail, but the whole build is faster
<braunr> if you can try it, i'm interested
<braunr> because chown/chmod is also slow on linux with fakeroot-tcp
<teythoon> i can try...
<braunr> so it's probably not a hurd bug
<teythoon> braunr: yes, it really is
<braunr> no i mean
<braunr> chown/chmod being slow with fakeroot-tcp is probably not a hurd bug
<braunr> but a fakeroot-tcp bug
<teythoon> chowning all files in /usr/bin takes 5.930s with fakeroot-hurd (6.09 with startup overhead) vs 26.42s (26.59s) with fakeroot-tcp
<braunr> but try it on linux (fakeroot-tcp i mean)
<braunr> although you may want to do it on something you don't care much about :p)
IRC, freenode, #hurd, 2013-12-03
* teythoon is gonna hunt a fakeroot bug ...
<teythoon> % fakeroot-hurd /bin/sh -c ":> /tmp/some_file"
<teythoon> /bin/sh: 1: cannot create /tmp/some_file: Is a directory
<braunr> ah fakeroot-hurd
<teythoon> prevents installing stuff with /bin/install
<teythoon> sure fakeroot-hurd, why would i work on the slow one ?
<braunr> i don't know
<braunr> because it makes chmod/chown/maybe others horrenddously slow
<braunr> ?
<teythoon> yes, fixing this involves fixing fakeroot-hurd
<braunr> are you sure ?
<braunr> i prefer repeating just in case: i saw that problem on linux as well
<braunr> with fakeroot-sysv
<teythoon> so ?
<braunr> i'm almost certain it's a pure fakeroot bug, not a hurd bug
<braunr> so
<teythoon> even if this is fixed, it still has to pay the socket communication overhead
<braunr> fixing fakeroot-hurd so that i can be used instead of fakeroot-tcp is a very good thing to do, obviously
<braunr> it*
<braunr> but it won't solve the chown/chmod speed
<braunr> (or, probably won't)
<teythoon> huh, why not ?
<braunr> 15:53 < braunr> i'm almost certain it's a pure fakeroot bug, not a hurd bug
<braunr> when i say it's slow, i should be more precise
<braunr> it doesn't show up in top
<teythoon> yes, but why would fakeroot-hurd suffer from the same issue ?
<braunr> the cpu is almost idle
<braunr> oh right, it's a completely different tool
<braunr> my bad
<braunr> right, right, the proper way to implement fakeroot actually :)
<teythoon> yes
<teythoon> this will bring near-native speed
IRC, freenode, #hurd, 2013-12-05
<teythoon> fakeroot-hurd just successfully built mig :)
<teythoon> hangs in dh_gencontrol when building gnumach or hurd though
<teythoon> i believe it hangs waiting for a lock
<teythoon> lock like in file lock that is
<teythoon> braunr: no more room for vm_map_find_entry in 80220a40
<teythoon> 80220a40 <- is that a task ?
<braunr> or a vm_map, not sure
<braunr> probably a vm_map
IRC, freenode, #hurd, 2013-12-06
<teythoon> well, aren't threads a source of endless entertainment ... ?
<teythoon> well, I found three more bugs in fakeroot-hurd
<teythoon> one of them requires fixing the locking used in fakeroot
<braunr> ouch
<teythoon> the current code does some lock cycling to aquire a lock out of order
<braunr> cycling ?
<teythoon> in the netfs_node_norefs function
<teythoon> release and reaquire
<braunr> i see
<teythoon> which imho should be better solved with a weak reference
<teythoon> working on it, it no longer deadlocks but i broke something else ...
<teythoon> endless fun ;)
<braunr> such things could have been done right in the beginning
<braunr> ...
<teythoon> yes, I wonder
<teythoon> libports has weak references
<teythoon> but pflocal is the only user
<braunr> hm
<teythoon> none of the lib*fs support that
<braunr> didn't i add one in libdiskfs too ?
<braunr> anyway, irrelevant
<braunr> weak references are a nice feature
<braunr> teythoon: i don't see the cycling you mentioned
<braunr> only netfs_node_refcnt_lock being dropped temporarily
<teythoon> yep, that one
<teythoon> line 145
<teythoon> note that due to another bug this code is currently never run
<braunr> how surprising ..
<braunr> the note about some leak actually gave a hint about that
<teythoon> yeah, that leak
<teythoon> I think i'm actually very close
<teythoon> it's just so frustrating, i thought i got it last night
<braunr> good luck then
<teythoon> thanks :)
IRC, freenode, #hurd, 2013-12-09
<teythoon> sweet, i fixed fakeroot-hurd :)
<braunr> /clap
<braunr> what was the problem ?
<teythoon> lots
<braunr> i see
<teythoon> it's amazing it actually run as well as it did
<braunr> mess strikes again
<braunr> i hate messy code ..
* teythoon is building half a hurd package using this ... stay tuned ;)
<azeem> teythoon: is this going to make building faster as well?
<teythoon> most likely, yes
<teythoon> fakeroot-tcp is known to be slow, even on linux
<braunr> teythoon: are you sure about the transparent retry patch ?
<teythoon> pretty sure, why ?
<braunr> it's about a more general issue that we didn't fix yet
<braunr> our last discussions about it lead us to agree that clients should check the identity of a server before interacting with it
<teythoon> braunr: i don't understand, what's the problem here ?
<braunr> teythoon: fakeroot does the lookup itself, doesn't it ?
<teythoon> yes
<braunr> teythoon: but was that also the case before your patch ?
<teythoon> braunr: yes
<braunr> teythoon: then ok
<braunr> teythoon: i guess fakeroot handles requests only for a specific set of calls right ?
<braunr> and for others, requests are directly relayed
<teythoon> braunr: yes
<braunr> and that still is the case, right ?
<teythoon> yes
<braunr> ok
<braunr> looks right since it only affects lookups
<braunr> ok then
<teythoon> well, fakeroot-hurd built half a hurd package in less than 70 minutes
<teythoon> a new record for my box
<braunr> compared to how much before ?
<braunr> (and why half of it ?)
<teythoon> unfortunately it hung after signing the packages... some perl process with a /usr/bin/tee child
<teythoon> killing tee made it succeed though
<teythoon> braunr: i don't build the udeb package
<braunr> oh ok
<teythoon> braunr: compared with ~75 with fakeroot-tcp and my demuxer rework, ~80 before
<braunr> teythoon: nice
IRC, freenode, #hurd, 2013-12-18
<teythoon> there, i fixed the last fakeroot-hurd bug
<teythoon> *whee* :)
<teythoon> i thought so many times that i got the last fakeroot bug ...
<teythoon> last as in it's in a good enough shape to compile the hurd package that is
<teythoon> but now it is
<braunr> :)
<braunr> this will make glibc and others so much faster to build
IRC, freenode, #hurd, 2013-12-19
<braunr> teythoon_: hum, you should make the behaviour of fakeroot-hurd on the last client exiting optional
<teythoon_> y?
<teythoon_> fakeroot-tcp does the very same thing
<braunr> fakeroot-hurd is different
<braunr> it's part of the file system
<teythoon_> yes
<braunr> users may want it to stay around
<braunr> and reuse it without checking it's actually there
<teythoon_> but once the last client is gone, who is ever getting another port to it ?
<teythoon_> no
<teythoon_> that cannot happen
<braunr> really ?
<teythoon_> yes
<braunr> i thought it was like remap
<braunr> since remap is based on it
<teythoon_> the same thing applies to remap
<teythoon_> only settrans has the control port
<braunr> hum
<teythoon_> and uses it once to get a protid for the working dir of the initial process started inside the chrooted environment
<braunr> you may not want to chroot inside
<teythoon_> so ?
<teythoon_> then, you get another protid
<braunr> i'll make an example
<braunr> i create a myroot directory implemented by fakeroot
<braunr> populate it
<braunr> leave and do something else,
<braunr> i might want to return to it later
<teythoon_> ah
<teythoon_> ok, so you are not using settrans --chroot
<braunr> or maybe i'm confusing the fakeroot translator and fakeroot-hurd
<braunr> 10:48 < braunr> you may not want to chroot inside
<braunr> yes
<teythoon_> hm
<teythoon_> ok, so the patch could be changed to check whether the last control port is gone too
<braunr> i have no idea of any practical use, but i don't see a valid reason to make a translator go away just because it has no client
<braunr> except for resource usage
<braunr> and if it's installed as a passive translator
<braunr> although that would make fakeroot loose its state
<braunr> though remap state is on the command line so it would be fine for it
<braunr> see what i mean ?
<teythoon_> yes i do
<braunr> fakeroot state could be saved in some db one day so it may apply, if anyone feels the need
<teythoon_> so what about checking for control ports too ?
<braunr> i'm not too familiar with those
<braunr> who has the control port of a passive translator ? the parent ?
<teythoon_> that should cover the use case you described
<teythoon_> for the parent translator
<teythoon_> for fsys_getroot requests it has to keep it around
<teythoon_> and for more fsys stuff too
<braunr> and if active ? settrans ? who just looses it ?
<teythoon_> if settrans is used to start an active translator, the parent fs still gets a right to the control port
<braunr> ok
<braunr> i don't have a clear view of what this implies for fakeroot-hurd
<braunr> we'd want fakeroot-hurd to clean all resources including the fakeroot translator on exit
<teythoon_> for fakeroot-hurd (or any child translator) this means that a port from the control port class will still exists
<teythoon_> so we do not exit
<teythoon_> oh, you're speaking of fakeroot.sh ? the wrapper script ?
<braunr> probably
<braunr> for me, fakeroot-hurd is the command line too, similar to fakeroot-sysv and fakeroot-tcp
<braunr> and fakeroot is the translator
<teythoon_> yes, agreed
<teythoon_> fakeroot-hurd could use settrans --force --chroot ... to force fakeroot to exit if the main chrooted process dies
<teythoon_> but that'd kill anything that outlives that process
<teythoon_> that might be legitimate, say a process daemonized
<teythoon_> so detecting that noone uses fakeroot is the much cleaner solution
<braunr> ok
<teythoon_> also, that's what fakeroot-tcp does
<braunr> which is why i suggested an option for that
<teythoon_> why add an option if we can do the right thing without troubling the user ?
<braunr> ah, if we can, good
<teythoon_> i think we can
<teythoon_> I'll rework the patch, thanks for the hint
<braunr> so
<braunr> just to be clear
<braunr> the way you intend it to work is
<braunr> wait for all clients and the control port to drop before shutting down
<braunr> the control port is dropped when dettaching the translator, right ?
<teythoon_> yes
<braunr> but hm
<braunr> what if clients spawn other processes ?
<braunr> they won't find the translator any more
<teythoon_> then, that client get's a port to fakeroot at least for it's working dir
<teythoon_> so another protid is created
<braunr> ah yes, it's usually choorted for such uses
<braunr> chrooted
<teythoon_> so fakeroot will stick around
<braunr> but clients, even from fakeroot, might simply use absolute paths
<teythoon_> so ?
<braunr> in which case they won't find fakeroot
<teythoon_> it will hit fakeroots dir_lookup
<teythoon_> sure
<braunr> how so ?
<teythoon_> if the path is absolute, it will trigger a magic retry of some kind
<teythoon_> so the client uses it's root dir port
<braunr> i thought the lookup would be done straight from the root fs port ..
<teythoon_> which points to fakeroot of course
<braunr> ah, chrooted again
<teythoon_> that's the whole point
<braunr> so this implies clients are chrooted
<teythoon_> they are
<teythoon_> even if you do another chroot
<braunr> what i mean is
<teythoon_> that root port also points to a fakeroot port
<braunr> if we detach the translator, and clients outside the chroot spawn processes, say shell scripts, they won't find the fakeroot tree
<braunr> now, i wonder if we want to actually handle that
<braunr> i'm just uncomfortable with a translator silently shutting down because it has no client
<teythoon_> if fakeroot is detached, how are clients outside the chroot ever supposed to get a handle to files inside the fakerooted env ?
<braunr> it makes sense for fakeroot, so the expected behaviours here aer conflicting
<braunr> they had those before fakeroot being detached
<teythoon_> then fakeroot wouldn't go away
<braunr> right
<braunr> unless there is a race but i don't think there is
<teythoon_> there isn't
<teythoon_> i call netfs_shutdown
<braunr> clients get the rights before the parent has a chance to terminate
<teythoon_> and only shutdown if it doesn't return ebusy
<braunr> makes sense
<braunr> ok go ahead :)
<teythoon_> cool, thanks for the walk-through ;)
<braunr> on the other hand ..
<braunr> that's a complicated topic left unfinished by the original authors
<teythoon_> one of many
<braunr> having translators automatically go away when there is no client may be a good feature
<braunr> but it only makes sense for passive translators
<braunr> and this should be automated
<braunr> the lib*fs libraries should be able to handle it
<teythoon_> or, we could go for proper persistence instead
<braunr> stay around if active, leave after a while when no more clients if passive
<braunr> why ?
<teythoon_> clean solution
<braunr> persistence looks much more expensive to me
<teythoon_> other benefits
<braunr> i mean
<braunr> persistence looks so expensive it doesn't make sense in a general purpose system
<teythoon_> sure, we could make our *fs libs handle this smarter at a much lower cost
<teythoon_> don't we get a handle to the underlying file ?
<braunr> i think we do yes
<teythoon_> if that's actually a file and not a directory, we could store data into it
<braunr> many translators are read-only
<teythoon_> so ?
<braunr> well, when we can write, we can use passive translators instead
<braunr> normally
<teythoon_> yes
<braunr> depends on the fs type actually but you're right, we could use regular files
<braunr> or a special type of file, i don't know
<antrik> braunr: BTW, I agree that active translators should only go away when no ports are open anymore, while passive ones can exit when control ports are still open but no protids
<teythoon> antrik: you mean as a general rule ?
<teythoon> that leaves the question how the translator distinguishes between having a passive translator record and not having one
<antrik> I believe I already arrived at that conclusion in some design discussion, probaly regarding namespace-based translator selection
<antrik> teythoon: yeah, as a general rule
<teythoon> interesting
<antrik> currently there are command line arguments controling timeouts, but they don't consider control ports IIRC
<teythoon> i thought there are problems with shutting down translators in general
<antrik> (also, command line arguments seem inconvenient to distinguish the passive vs. active case...)
<teythoon> yeah, but we disregard the timeouts in the debian flavor of hurd
<antrik> teythoon: err... no we don't. at least not last time I knew. are you confusing this with thread timeouts?
<antrik> simple test: do ls -l on /dev, wait a few minutes, compare
<teythoon> what do you expect will happen ?
<antrik> the unused translators should go away
<teythoon> no
<antrik> that must be new then
<teythoon> might be, yes
<teythoon>
<braunr> antrik: debian currently disables both the global and thread timeouts in libports
<braunr> my work on thread destruction consists in part in reenabling thread timeouts, and my binary packages do that well so far :)
<antrik> braunr: any idea why the global timeouts were disabled?
IRC, freenode, #hurd, 2013-12-20
<braunr> antrik: not sure
<braunr> but i suspect there could be races
<braunr> if a message arrives while the server is going away, i'm not sure the client can determine this and retry transparently
<antrik> good point... not sure how that is supposed to work exactly
IRC, freenode, #hurd, 2013-12-31
<braunr> btw, we should remove the libports_stability patch and directly change the upstream code
<braunr> if you agree, i can force the global timeout to 0 (because we're still not sure what can go wrong when a translator goes away while a message is being delivered to it)
<braunr> i didn't experience any slowdown with thread destruction however
<braunr> so i'm tempted to set that to an actual reasonable timeout value of 30-300 seconds
<teythoon> braunr: if you do, please introduce a macro for the default value so it can be changed easily
<braunr> teythoon: yes
<braunr> i don't understand why these are left as parameters tbh
<teythoon> true
<braunr> 30 seconds seems to be plenty enough
IRC, freenode, #hurd, 2014-01-17
<braunr> time to give fakeroot-hurd a shot <braunr> <teythoon> braunr: (wrt fakeroot-hurd) well in my book that shouldn't happen <teythoon> that's why i put the assertion there ;) <braunr> i assumed so :) <teythoon> then again, /me does not agree with "threads" as concurrency model >,<, and that feeling seems to be mutual :p <braunr> ? <teythoon> well, obviously, the threads do not agree with me wrt to that assertion <braunr> the threads ? <teythoon> well, fakeroot is a multithreaded server <braunr> teythoon: i'm not sure i get the point, are you saying you're not comfortable with threads ? <teythoon> that's exactly what i'm saying <braunr> ok <braunr> coroutines/functional i guess ? <teythoon> csp <teythoon> functional not so much
IRC, freenode, #hurd, 2014-01-20
(See also: libpthread, fix “have kernel resources”.)
<braunr> teythoon: it's perfectly possible that the bug i had with fakeroot-hurd have been caused by my own glibc thread related patches <braunr> has* <teythoon> ok <teythoon> *phew* :p <braunr> :) <teythoon> i wonder if youpi could reproduce his issue on his machine <braunr> what issue ? <braunr> i must have missed something <teythoon> some package failed <teythoon> but he didn't gave any details <teythoon> he wanted to try it on his vm first <braunr> ok
IRC, freenode, #hurd, 2014-01-21
<braunr> teythoon: i still get the same assertion failure with fakeroot-hurd <braunr> will take a look at that sometimes too <teythoon> braunr: hrm :/ <braunr> teythoon: don't worry, i'm sure it's nothing big <braunr> in the mean time, there are updated hurd and glibc packages on my repository with fixed tls and thread destruction <teythoon> cool :)
IRC, freenode, #hurd, 2014-01-23
<braunr> teythoon: can you briefly explain this fake reference thing in fakeroot when you have some time please ? <teythoon> braunr: fakeroot creates ports to hand out to clients <teythoon> every port represents a node and references a real node <teythoon> fakeroot allows one to set attributes, e.g. file permissions on any node as if the client was root <teythoon> those faked attributes are stored in the node objects <braunr> let's focus on fake_reference please <teythoon> once some attribute is faked, that node has to be kept alive <teythoon> otherwise, that faked information is lost <teythoon> so if the last peropen object is closed and some information is faked, a fake reference is kept <teythoon> as indicated by a flag <braunr> hm <teythoon> in dir lookup, if a node is looked-up that has a fake reference, it is recycled, i.e. the flag cleared and the referecne count is not incremented <teythoon> so every time fakeroot_netfs_release_protid is called b/c, the node in question should not have the fake reference flag set <braunr> what's the relation between the number of hard links and this fake reference ? 
<teythoon> i don' <teythoon> i don't think fakeroot has a notion of 'hard links' <braunr> it does <braunr> the fake reference is added on nodes with a hard link count greater than 0 <braunr> but i guess that just means the underlying node still exists <teythoon> ah yes <teythoon> right <teythoon> currently, if the real node is deleted, the fake node is still kept around <braunr> let's say it's ok for now <teythoon> that's what the comment is talking about, the one that indicates that garbage collection could help here <teythoon> yes <teythoon> properly fixing this is difficult <braunr> agreed <braunr> it would require something like inotify anyway <teythoon> b/c of the way file deletion works <braunr> let's just ignore the issue, that's not what i'm hunting <teythoon> agreed <braunr> the assertion i have is telling us that we're dropping a fake reference <braunr> are we certain this isn't possible ? <teythoon> that function is called if a client dereferences a port <teythoon> in order to have a port in the first place, it has to get it from a dir_lookup <teythoon> the dir lookup turns a fake reference into a real one <teythoon> so i'm certain of that (barring a race condition somewhere) <braunr> ok <braunr> netfs_S_dir_lookup grabs idport_ihash_lock (line 354) but doesn't release it if nn == NULL (lines 388-392) <teythoon> hm, my file numbers are slightly different o_O <braunr> i have printfs around <braunr> sorry :) <teythoon> ok <teythoon> new node unlocks it <teythoon> new_node <braunr> oh <braunr> how unintuitive .. <teythoon> yes, don't blame me ;) that's how it was <braunr> :) <braunr> worse, the description says "if successful" .. <braunr> ah no, the node lock <braunr> ok <teythoon> yes, badly worded description <braunr> i strongly doubt it's a race <teythoon> how do you trigger that assertion failure ? 
<braunr> dpkg-buildpackage -rfakeroot-hurd -uc -us <braunr> for the hurd package <braunr> very similar to one of your test cases i think <teythoon> umm :-/ <braunr> one thing that i find confusing is that fake_reference seems to apply to nodes, whereas release_protid is about, well, protids <braunr> is there a 1:1 relationship ? <braunr> since there is a peropen in the protid, i assume not <braunr> it may be a race actually <braunr> np->references must be accessed with netfs_node_refcnt_lock locked <braunr> hm no, that's not it <teythoon> no, it's not a 1:1 relationship <teythoon> note that the lock idport_ihash_lock serializes most operations, despite it's name indicating that it's only for the hash table <teythoon> the "interesting" operations being dir_lookup and release_protid <braunr> yes <braunr> again, that's another issue <teythoon> why ? that's a pretty strong guarantee already <braunr> ah yes, i was referring to scalability <teythoon> sure <braunr> the assertion is triggered from ports_port_deref in ports_manage_port_operations_multithread <teythoon> but i found it hard to reason about fakeroot, there are multiple locks involved, two kinds of reference counting across different libs <braunr> yes <teythoon> yes, that's to be expected <braunr> teythoon: do we agree that the fake reference is reused by a protid ? <teythoon> braunr: yes <braunr> why is there a ref counter for the protid as well as the peropen then ? :/ <teythoon> funny... i thought there was no refcnt for the peropen objects, but there is <teythoon> but for fakeroot-hurd that shouldn't matter, right ? <braunr> i don't know <teythoon> here, one protid object is associated with one peropen object <braunr> yes <teythoon> and the other way around, i.e. it's 1:1 <teythoon> so the refcount for those should be identical <braunr> but i get a case where protid has a refcnt of 0 while the peropen has 2 .. 
<teythoon> umm, that doesn't sound right <braunr> teythoon: ok, it does look like a race on np->references <braunr> node references are protected by a global lock in lib*fs libs <teythoon> yes <braunr> you check it without holding it <braunr> which means another protid can be closed at the same time, setting the flag on the underlying node <braunr> i'll make a proper patch soon <teythoon> they cannot both hold the hash lock <braunr> hm <braunr> teythoon: actually, i don't see why that's relevant <braunr> one thread closes its protid, sets the fakeref flag <braunr> the other does the same, chokes on the assertion <braunr> serially <teythoon> i'm always a little fuzzy when exactly the references get decremented <teythoon> but shouldn't only the second thread set the fakeref flag ? <braunr> well, that's not what i see <braunr> i'll check what happens to this ref counter <teythoon> see how my release_protid function calls netfs_release_protid just after the out label <teythoon> *while holding the big hash lock <teythoon> so, any refcounting should happen while the lock is being held, no ? <braunr> perhaps <braunr> now, my logs show something new <braunr> a case where the protid being released was never printed before <braunr> i.e. not obtained from dir_lookup <braunr> or at least, not fakeroot dir_lookup <teythoon> huh, where did it came from then ? <braunr> no idea <teythoon> only dir_lookup hands out those <braunr> check_openmodes calls dir_lookup too <teythoon> yes, but that's not our dir_lookup <braunr> that's what i mean <braunr> it bypasses fakeroot's custom dir_lookup <braunr> but i guess the reference already exists at this point <teythoon> bypass ? i wouldn't call it that <braunr> you're right, wrong wording <teythoon> it accesses files on other translators <braunr> yes <braunr> the netnode is already present <teythoon> yes <braunr> could it be the root node ? 
<teythoon> i do not believe so <teythoon> the root node is always faked <teythoon> and is handed out to the first process in the fakeroot env for it's current directory port <teythoon> so you could try something that chdirs away to test that hypothesis <braunr> the assertion looks triggered by a chdir <teythoon> how do you know that ? <braunr> dh_auto_install: error: unable to chdir to build-deb <teythoon> ah <teythoon> well, or that is just the operation after fakeroot died and completely unrelated <braunr> maybe <teythoon> can you trigger this reliably ? <braunr> yes <braunr> i'm trying to write a shell script for that <teythoon> so for you, fakeroot-hurd never succeeded in building a hurd package ? <braunr> no <teythoon> on darnassus ? <braunr> yes <teythoon> b/c i stopped working on fakeroot-hurd when it was in a good-enough shape to build the hurd package <teythoon> >,< <teythoon> maybe my system is not fast enough to hit this race (if it turns out to be one) <braunr> some calls seems to decrease the refcount of the root node <braunr> call* <teythoon> have you confirmed that it's the root node ? <braunr> almost <braunr> i could say yes <braunr> teythoon: actually no, it's not .. <braunr> could be .. <braunr> teythoon: on what node does fakeroot-hurd install the fakeroot translator when used to build debian packages ? <braunr> hum <braunr> could it simply be that the check on np->references should be moved above the assertion ? <teythoon> braunr: it is not bound to any node, check settrans --chroot
<braunr> oh right <braunr> teythoon: ok i mean <braunr> does it shadow / ? <braunr> looks very likely, otherwise the chroot wouldn't work <teythoon> i'm not sure what you mean by shadow <braunr> settrans --chroot cmd -- / /hurd/fakeroot ? <teythoon> but yes, for any process in the chroot-like env every real node is replaced, including / <braunr> makes sense <braunr> teythoon: moving the assertion seems to fix the issue <braunr> intuitively, it seems reasonable to assume the fakeref flag can only be set when there is only one reference, namely the fake reference <braunr> (well, the fake ref, recycled by the last open) <teythoon> no, i don't follow <teythoon> i'd still say, that if ...release_protid is called, then there is no way for the fake flag to be set in the first place <teythoon> that's why i put the assertion in ;) <braunr> on the other hand, you check the refcnt precisely because other threads may have reacquired the node <teythoon> but why would moving the assertion change anything ? <teythoon> if we would do that, we'd "lose" all threads that see np->reference being >1 <teythoon> but for those objects the fake_reference flag should never be set anyways <teythoon> i cannot see why this would help <teythoon> (does it help ?) <teythoon> (and if it does, it points to a serious problem imho) <braunr> i'm recreating the traces that made me think that <braunr> to get a clearer view of what's happening <braunr> the problem i have with the current code is this <braunr> there can be multiple protid referring to the same node occurring at the same time <braunr> they are serialized by the hash table lock, ok <braunr> but there apparently are cases where the first (of two) protids being closed sets the fakeref flag <braunr> and the following chokes because the flag is set <braunr> i assume you put this refcount check because you assumed only the last protid being closed can set the flag, right ? <braunr> but then, why > 1 ? why not > 0 ? 
<teythoon> yes, that's what i was trying to assert <teythoon> b/c the 1 is our reference <braunr> which one exactly ? <teythoon> >1 is anyone *beside* us <teythoon> ? <braunr> hm <braunr> you mean the reference held by the protid being destroyed <teythoon> yes <braunr> isn't that reference already dropped before calling the cleanup function ? <braunr> ah no, it's the node ref <teythoon> yes <braunr> released by netfs_release_protid <teythoon> exactly <braunr> which is called without the hash table lock held <braunr> hm no <braunr> it's locked <braunr> damn my brain is slow today <teythoon> i actually think that it's the combination of manual reference counting and the primitive concurrency model that makes it hard to reason about this <braunr> well <braunr> the model is quite simple too <braunr> accesses to refcounters must be protected by the appropriate lock <braunr> this isn't done here, on the assumption that all referencing operations are protected by another global lock all the time <teythoon> even if a model is simple, this does not mean that it is a good model for human beings to comprehend and reason about <braunr> i don't know <braunr> note that netfs_drop_node is designed to be called with netfs_node_refcnt_lock locked <braunr> implying the refcount must remain stable between checking it and dropping the node <braunr> netfs_make_peropen is called without the hash table lock held in dir_lookup <braunr> and this increases the refcount <braunr> although the problem is rather that something decreases it without the lock held <teythoon> we should port libtsan and just ask gcc -fsanitize=thread <braunr> what about the netfs_nput call at the end of dir_lookup ? 
<braunr> the fake ref should be set by the norefs function <teythoon> that should not decrease the count to 0 b/c the caller holds a reference too <braunr> yes that's ugly <braunr> ugh <braunr> i'm unable to think clearly right now <teythoon> as mentioned in the commit message, you cannot do something like this in the norefs function <teythoon> bbl ;) <braunr> bye teythoon <braunr> thanks for your time <braunr> for when you come back : <braunr> instead of maintaining this "fake" reference, why not assumeing the hash table holds a reference, and simply count it <braunr> the same way a cache does <braunr> and drop that reference when removing a node, either to reflect the current state of the underlying node, or because the translator is being shut down ? <braunr> why not assume* <braunr> bbl too <teythoon> sure, refactoring is definitively an option
IRC, freenode, #hurd, 2014-01-24
<braunr> teythoon: ok, i'll take care of fakeroot <teythoon> braunr: thanks. tbh i was a little fed up with that little bugger >,< <braunr> i can imagine <braunr> considering the number of patches you've sent already <braunr> teythoon: are you sure about your call to fshelp_lock_init ? <teythoon> yes, why do you ask ? <teythoon> (the test case is given in the commit message) <braunr> it doesn't look right to me to call "init" while the node is potentially locked <braunr> i noticed libdiskfs peropen release function takes care of releasing locks <braunr> it looks better to me <teythoon> it's not about releasing the lock <teythoon> it's about faking the file being closed which implicitly releases the lock <braunr> the file is being close <braunr> closed <braunr> since it's in the cleanup function <teythoon> yes, but we keep it b/c the file has faked attributes <teythoon> did you look at the problem description in the commit message ? <braunr> we keep the node <braunr> not the peropen <teythoon> so ? <teythoon> the lock is in the node <braunr> why would libdiskfs do it in the peropen release then ? <braunr> there is an inconsistency somwhere <braunr> actually, the lock looks to be per open <braunr> or rather, the lock is per node, but its status is recorded per open <braunr> allowing the implementation to track if a file descriptor was used to install a lock and release it when that file descriptor goes away <teythoon> why would the node be locked ? <teythoon> locked in what way, file-locking locked ? <braunr> yes <braunr> posix explicitely says that file locks must be implicitely removed when closing the file descriptor used to install them, so that makes sense <teythoon> isn't hat exactly what i'm doing ? 
<braunr> no <braunr> you're initializing the file lock <braunr> init != unlock <braunr> and it's specific to fakeroot, while it looks like libnetfs should be doing it <teythoon> libnetfs would do it <teythoon> but we prevent that by keeping the node alive <braunr> again, it's a per open thing <braunr> and no, libnetfs doesn't release locks implicitely in the current version <teythoon> didn't we agree that for fakeroot one peropen object is associated with one protid object ? <braunr> yes <braunr> and don't keep those alive <braunr> so let them die peacefully, and fix libnetfs so it releases the lock as it's supposed to <braunr> and we* don't <teythoon> we don't keep those alive <teythoon> why would we ? <braunr> yes that's what i wanted to say <braunr> what i mean is <braunr> since letting peropens die is already what is being done <braunr> there is no need for a special handling of locks in fakeroot <teythoon> oh <braunr> on the other hand, libnetfs must be fixed <teythoon> ok, that might very well be true <teythoon> (we need to bring libnetfs and diskfs closer so that they can be diff'ed easily) <braunr> i just wanted to check your reason for using lock_init in the first place <braunr> yes .. <braunr> teythoon: also, i think we actually do have what's necessary to deal with garbage collection <braunr> namely, dead-name notifications <braunr> i'll see if i can cook something simple enough <braunr> otherwise, merely keeping every node around is also acceptable considering the use cases <teythoon> dead-name notifications won't help if the real node disappears, no ? <braunr> teythoon: dead name notifications on the real node port :) <braunr> teythoon: at least i can reliably build the hurd package using fakeroot-hurd now <braunr> let's try glibc :)
IRC, freenode, #hurd, 2014-01-25
<teythoon> braunr: awesome :) <braunr> teythoon: hm not sure :/ <braunr> darnassus got oom <braunr> teythoon: could be unrelated though <braunr> teythoon: something has apprently made /home unresponsive :( <braunr> teythoon: i suspect bots hitting apache and in particular the git repositories to have increased memory usage
IRC, freenode, #hurd, 2014-01-26
<braunr> teythoon: btw, fakeroot interacts very very badly with other netfs file systems <braunr> e.g., listing /proc through it creates lots of nodes <braunr> i'm not yet sure how to fix that <braunr> using a dead name notification doesn't seem appropriate (at least not directly) because fakeroot holds a true reference that prevents the deallocation of the target node
IRC, freenode, #hurd, 2014-01-27
<braunr> teythoon: good news (more or less): fakeroot is actually leaking a lot when crossing file systems <braunr> which means if i fix that, there is a good chance we can use it to build all packages with it <braunr> -with it <teythoon> what do you mean exactly ? <braunr> if target nodes are from /, there is no such leak <braunr> as soon as the target nodes are from another file system, ports rights are leaked <braunr> that's what fills the kernel allocator actually <teythoon> oh, so dir_lookup leaks ports when crossing translator boundaries ? <braunr> seems so <teythoon> yeah, that might very well be it <teythoon> the dir_lookup logic in lib*fs is quite involved :/ <braunr> yes, my simple attempts were unsuccessful <braunr> but i'm confident i can fix it soon <teythoon> that sounds good :) <braunr> i also remove the fake_ref flag and replace it with "accounting the reference in the hash table" as soon as a node is faked <teythoon> fine with me <braunr> these will be the expected leak <braunr> but they're far less in numbers than what i observe <braunr> and garbage collection can be implemented later <braunr> although i would prefer notifications a lot more <braunr> end of the news, bbl :) <braunr> found it :> <teythoon> braunr: -v ;) <braunr> err = dir_lookup (...); <braunr> if (dir != dnp->nn->file) mach_port_deallocate (mach_task_self (), dir); <braunr> in other words, deallocate ports for intermediate file system root directories .. :) <braunr> teythoon: currently building hurd and glibc packages <braunr> but i intend to improve some more with the addition of a default faked state <braunr> so that only nodes with modified faked states are retained <teythoon> how do you mark nodes as having the default faked state ? 
<braunr> i don't <teythoon> ok, right, makes sense :) <teythoon> this sounds awesome, thanks for following up on this <braunr> i'm quite busy with other stuff so, with proper testing, it should take me the week to get merged <braunr> teythoon: well thanks for all the fixes you've done <braunr> fakeroot was completely unusable before that <teythoon> if you push your changes somewhere i'll integrate them into my packages and test them <braunr> ok <braunr> implementing fakeroot -u could also be a good thing <braunr> and this should work easily with that default faked state strategy
IRC, freenode, #hurd, 2014-01-28
<braunr> teythoon: i should be able to test fakeroot-hurd with the default faked attributes strategy today on glibc <teythoon> braunr: very nice :) <braunr> azeem_: do you happen to know if fakeroot -u is used by debian ? <braunr> i mean when building packages <teythoon> braunr: how does fakeroot-hurd perform on darnassus ? <teythoon> i mean, does it yield a noticeable improvement over fakeroot-tcp just like on my slow box ? <braunr> i'm not measuring that :/ <teythoon> ok, no problem <braunr> and since nodes are removed from the hash table, performance might decrease slightly <braunr> but the number of rights is kept very low, as expected <teythoon> that's good <braunr> i keep seeing leaks though <braunr> when switching cwd between file systems <teythoon> humm <braunr> so i assume something is wrong with the identity of . or .. <braunr> it's so insignificant compared to the previous problems that i won't waste time on that <braunr> teythoon: the problem with measuring on darnassus is that it's a public machine <teythoon> right <braunr> often scanned by ssh worms or http bots
(See also: “cannot create /dev/null: Interrupted system call”.)
<braunr> but it makes complete sense to get better performance with fakeroot-hurd <braunr> that's actually one of the reasons i'm working on it <braunr> if not the main one <teythoon> :) <teythoon> that was my motivation too <braunr> it shows how you can get an interchangeable unix tool that directly plugs well with the low level system <braunr> and make it work better <teythoon> nicely put :) <braunr> teythoon: i still can't manage to build glibc with fakeroot-hurd <braunr> but i'm not sure why :/ <braunr> there was no kernel memory exhaustion this time <teythoon> :/ <braunr> cp: cannot create regular file `debian/libc-bin.dirs': Permission denied <braunr> hum <braunr> youpi: do you know if building debian packages requires fakeroot -u option ? <youpi> I don't know <gg0> braunr: man dpkg-buildpackage says it just runs "fakeroot debian/rules <target>" <gg0> sources confirm that <braunr> gg0: ok
IRC, freenode, #hurd, 2014-01-29
<braunr> it seems that something sets the permissions of this debian/libc-bin.dirs file to 000 ... <teythoon> i've seen this too <braunr> oh <braunr> do you think it's a fakeroot-hurd bug ? <teythoon> have i mentioned something like this in a commit message ? <teythoon> yes <teythoon> it is <braunr> ok <braunr> i didn't see any mention of it <braunr> but i could have missed it <teythoon> hm, i cannot recall it either <teythoon> but i've seen this issue with fakeroot-hurd <braunr> ok <braunr> it's probably the last issue to fix to get it to work for our packages <braunr> teythoon: i think i have a solution for that last mode bug <braunr> fakeroot doesn't relay chmod requests, unless they change an executable bit <braunr> i don't see the point, and simply removed that condition to relay any chmod request <teythoon> braunr: did it work ? <braunr> no <braunr> fakeroot still consumes too many ports <braunr> and for each file, there are at least two ports, the faked one, and the real one <braunr> it should be completely reworked <braunr> but i don't have time to do that <braunr> i'll see if it works when building from scratch <braunr> actually, it's not even a quantity problem but a fragmentation problem <braunr> the function that fails is kmem_realloc .. <braunr> ipc spaces are arrays in kernel space .... <teythoon> it's more like three ports per file, you forgot the identity port <braunr> ah yes
IRC, freenode, #hurd, 2014-02-03
<braunr> teythoon: i'll commit my changes on fakeroot tonight <braunr> they do improve the tool, but not enough to build glibc with it <teythoon> braunr: cool :), so how do we make it fully usable ? <braunr> teythoon: i don't know .. <braunr> i'll try re adding detection of nodes with no hard links for one <braunr> but imho, it needs a rework based on what the real fakeroot does <braunr> i won't work on it though <braunr> teythoon: also, it looks like i've tested building glibc with a wrong test binary of my fakeroot version :/ <braunr> so consider all test results irrelevant so far
IRC, freenode, #hurd, 2014-02-04
<braunr> fakeroot-hurd might turn out to be easily usable for our debian packages with the fixed binary :) <braunr> teythoon: hum, can you explain 672005782e57e049c7c8f4d6d0b2a80c0df512b4 (trans: fix locking issue in fakeroot) when you have time please ? <braunr> it looks like it introduces a deadlock by calling new_node (which acquires the hash table lock) while dir is locked, violating the hash table -> node locking order <teythoon> braunr: awesome, then there still is hope for fakeroot-hurd :) <braunr> teythoon: i've been able to build glibc packages several times this night <braunr> so except for this deadlock i've seen once, it looks good <teythoon> right <teythoon> that deadlock <teythoon> right, it does indeed violate the partial order of the locks :-/ <braunr> teythoon: can you explain why you moved the lock in attempt_mkfile please ? <braunr> teythoon: i've just tested a fakeroot binary without the patch introducing the deadlock, and glibc built without any problem <teythoon> braunr: well, this is very good news :) <braunr> teythoon: but i still wonder why you made this patch in the first place, i don't want to revert it blindly and reintroduce a potential regression <teythoon> braunr: i thought i was fixing the order in which locks were taken. if the commit message does not specify that it fixes an issue, then i was probably just wrong and you can revert it <braunr> oh ok <braunr> good <braunr> teythoon: another successful build :) <braunr> i'll commit my changes <teythoon> awesome :) <braunr> there might still be concurrency issues but it's much better <teythoon> i'm curious what you did :) <braunr> so little :) <braunr> i was sick all week heh <braunr> you'll se <braunr> see <teythoon> well, that's good actually ;) <braunr> yes <braunr> teythoon: actually there was another debugging line left over, and again, my test results are irrelevant @#!
IRC, freenode, #hurd, 2014-02-05
<braunr> teythoon: i got an assertion about nn->np->nn not being equal to nn atfer the hash table lookup is dir_lookup <braunr> +failure <teythoon> that's bad <braunr> not over yet <teythoon> i had a couple of those too <teythoon> i guess it's a use after free <braunr> yes <teythoon> i used to poison the pointers and comment out the frees to track them down iirc <braunr> teythoon: one of your patches stores netnodes instead of nodes in the hash table, citing some overwriting issue <braunr> teythoon: i don't understand why using netnodes fixes this <teythoon> braunr: libihash has this cookie for fast deletes <teythoon> that has to be stored somewhere <teythoon> the node structure has no room for it <braunr> uh <teythoon> yes <teythoon> it was that bad <braunr> ... <teythoon> hence the uglyish back pointers <braunr> i see <teythoon> looking back i cannot even say why it worked at all <braunr> well, it didn't <teythoon> i believe libihash must have destroyed a linked list in the node struct <braunr> possibly <teythoon> no, it did not >,<, but for simple tests it kind of did <braunr> yes fakeroot sometimes corrupts memory badly .... <braunr> and yes, turns out the assertion is triggered on nodes with 0 refs .. <braunr> teythoon: it looks like even the current version makes wrong usage of the ihash interface <braunr> locp_offset is defined as "The offset of the location pointer from the hash value" <braunr> and indeed, it's an intptr_t <braunr> teythoon: hm no, it looks ok actually, forget what i said :) <teythoon> *phew <teythoon> :p <braunr> hmm, still occasional double frees in fakeroot, but it looks in good shape for single threaded tasks like package building <braunr> teythoon: i've just sent my fakeroot patches <teythoon> braunr: sweet, i'll have a closer look tomorrow :) <braunr> teythoon: i couldn't debug the double frees though :/
IRC, freenode, #hurd, 2014-02-06
<braunr> btw, i'm able to successfully use fakeroot-hurd to build glibc packages, but is there a way to make sure the resulting archives contain the right privileges and ownerships ? <youpi> I don't remember whether debdiff checks permissions <youpi> braunr: I've just got fakeroot-hurd debian/rules clean <youpi> dh_clean <youpi> fakeroot: ../../trans/fakeroot.c:161: netfs_node_norefs: Assertion `np->nn->np == np' failed. <youpi> while building eglibc <teythoon> youpi: yes, that lockup is most annoying... :/ <braunr> youpi: with the new version ? <youpi> yes <braunr> hum <braunr> i only had rare double frees, not that any more :/ <braunr> youpi: ok i got the error too <braunr> still not good enough <youpi> ok
IRC, freenode, #hurd, 2014-02-07
<braunr> youpi: debdiff seems to handle permissions <braunr> i've found the cause of the assertions <youpi> braunr: groovie :)
IRC, freenode, #hurd, 2014-02-08
<teythoon> braunr: nice :)
IRC, freenode, #hurd, 2014-02-10
<braunr> and, on a completely different topic, here is a crash i can reproduce when using fakeroot:
IRC, freenode, #hurd, 2014-02-11
<braunr> still working on fakeroot <braunr> there are still races (not disturbing for package building but still ..) <braunr> there may be wrong right handling <teythoon> i believe i have witnessed a fakeroot deadlock :/ <braunr> aw <teythoon> not sure though, buildbot killed the build process before i could investigate <braunr> teythoon: was it a big package ? <teythoon> half of the hurd package <braunr> that's not a port right overflow then
IRC, freenode, #hurd, 2014-03-05
<teythoon> youpi: what about the exec_filename patch series? even though fakeroot still has some issues (has it?), i consider it worthy for inclusion
<youpi> Roland was disagreeing with it <youpi> iirc the fakeroot issue was solved <teythoon> braunr: ^ <braunr> fakeroot goot a lot more robust than it used to be <braunr> but i haven't checked that it actually behaves exactly like the library for corner cases <braunr> there are minor differences <braunr> also, it seems to trigger concurrency bugs in ext2fs <braunr> e.g. git reporting that files either "already exist" or "can't be found" <braunr> it happens (rarely) when directly using ext2 <braunr> and more often through fakeroot <braunr> i didn't take the time to investigate | http://www.gnu.org/software/hurd/open_issues/virtualization/fakeroot.html | CC-MAIN-2015-06 | refinedweb | 8,554 | 52.7 |
This library is a revised version of an excellent script released in 2006, updated to take advantage of modern MQL features. Some features have been removed, and new ones have been added.
On both platforms (MetaTrader 4 and MetaTrader 5), you can run the following script to save a report (in the MetaTrader 4 format):
    #include <Report.mqh>

    void OnStart()
    {
      REPORT::ToFile("Report.htm");
    }
and see a generated HTML report in the resulting file:
It can be especially useful in MetaTrader 5, which does not provide visual HTML reports like MetaTrader 4 (as of the time of publication).
Output of balance graph on the chart.
#include <Report.mqh> void OnStart() { REPORT::ToChart(); // Output of balance graph on the chart }
Automatic saving of reports in the tester at the end of a single test and during optimization.
At the end of a single run in the tester, the library can automatically save a report on testing results of any Expert Advisor. Add to the EA source code the following lines:
#define REPORT_TESTER // Reports will be automatically written in the tester #include <Report.mqh>
The same lines will enable saving of reports of each optimization run.
It allows you to immediately evaluate results without waiting for the optimization completion. After the end of optimization, you will not need to launch separate tests and wait for results. It allows you to visually evaluate all results calculated by the optimizer. Balance graphs (PNG files) of multiple separate runs can be shown on one chart as thumbnails.
In MetaTrader 5, the library uses the MetaTrader 4Orders library.
To add balance charts and input values to reports in the optimization mode, you need to register the TypeToBytes library.
#include <TypeToBytes.mqh> // #define REPORT_TESTER // Reports will be automatically written in the tester #include <Report.mqh>
Translated from Russian by MetaQuotes Software Corp.
Original code:
An Expert Advisor without a single indicator. Uses lot and step increase.
A channel based on peaks and troughs of AlexSTAL_ZigZagProf.
The script saves current chart settings to a template with the specified name.
Measuring the net performance of MetaTrader 4/5 strategy testers. | https://www.mql5.com/en/code/18801 | CC-MAIN-2018-09 | refinedweb | 349 | 56.55 |
Right now my programs run in msdos prompt. How do I make them run in Windows?
This is a discussion on Programs run in dos. Why not windows? within the C++ Programming forums, part of the General Programming Boards category; Right now my programs run in msdos prompt. How do I make them run in Windows?...
Right now my programs run in msdos prompt. How do I make them run in Windows?
well you have dos programming and windows programming. Theres a forum here for window programming, and heres some tutorials:
make sure you have a firm knowledge of c/c++ before hand.
the best things in life are simple.
Well, I have the code to make a window
;
}
But how do I then put my dos program code into it.
the best things in life are simple.
c++ is c++ isn't it? Is there different variations for windows and dos?
Actually c++ is not c++.
Here, try it:
[CODE]
#include <iostream>
using std::cout;
int main () {
int c = true;
if (c++==c++) cout << "C++ is C++";
if (c++!=c++) cout << "C++ is not C++";
return 0;
}
[/ODE]
LOL!LOL!Originally posted by Imperito
Actually c++ is not c++.
Here, try it:
Code:#include <iostream> using std::cout; int main () { int c = true; if (c++==c++) cout << "C++ is C++"; if (c++!=c++) cout << "C++ is not C++"; return 0; }
But c++ can be c++:
Programmers have the power!Programmers have the power!Code:if (c++==c++) cout << "C++ is not C++"; if (c++!=c++) cout << "C++ is C++";
Regards,
Mario Figueiredo
Using Borland C++ Builder 5
Read the Tao of Programming
This advise was brought to you by the Comitee for a Service Packless World
Yeah well in my example, the strings came from the logic. I could just as easily do this:
Code:{ int a = 5; int b = 7; int c = b - 2; if (a == c) cout << "Pi is just three\n"; if (b != a) cout << "This is not actually a computer, it is a pencil with a piece of lettuce on it\n"; }
As I always say: C++ is the best.
but C++ is not C++ this is up to the programmer.
for your question...
and if you have C++ compiler under windows, please open your file, and write your code again and thin compile it.
and if you have any problem ... just write your error with your code, and I will be glad to help you.
but still C++ is the best.
C++
The best
If you actually took time to read what he said you would understand that he wasnt having errors. He was wondering how to make windows and then how to get his DOS program into the window.
I got it now...
C++
The best
Well bud before you do anything learn how to code a win32 GUI... pure DOS applications (not win32 bit console applications) are written in 16bit code. now, when translating them into a win32 gui, you will loose percision and have errors.
So the fact is learn how to write win32bit GUI applications, with the WINAPI or MFC, then rewrite your DOS code in the new program.
its that simple..
the DOS GUI and the WIN GUI, are just front end code...
so what ever DOS GUI you did is now garbage.
your back end code should be very easibly ported for the WIN32 environment.
Im guessing you are new to C++ on a windows system?
Last edited by Liam Battle; 05-15-2002 at 02:10 PM.
LB0: * Life once school is done
LB1: N <- WakeUp;
LB2: N <- C++_Code;
LB3: N >= Tired : N <- Sleep;
LB4: JMP*-3;
Well, I'm back.
After my brief time of studying windows programming, I gave up.
Now though, I want to program for the Pocket PC, and it's between C++ and Visual Basic. I think I'll choose c++.
Anyway, I'll look at some of the tutorials you mentioned, as knowing Windows Programming is needed to program for the Pocket PC.
Anybody here program for mobile devices? | http://cboard.cprogramming.com/cplusplus-programming/14299-programs-run-dos-why-not-windows.html | CC-MAIN-2014-15 | refinedweb | 679 | 83.86 |
I know this is easy but I think some people out there are interested
in my idea about storing user input in an array...... i read it some where and thought about bringing it much closer to you.
import java.util.Scanner; /** * * @author BUSKER-OTT */ public class USERINPUT { public static void main(String[] args) { Scanner input = new Scanner(System.in); //allow user input; System.out.println("How many numbers do you want to enter?"); int num = input.nextInt(); int array[] = new int[num]; System.out.println("Enter the " + num + " numbers now."); for (int i = 0 ; i < array.length; i++ ) { array[i] = input.nextInt(); } //you notice that now the elements have been stored in the array .. array[] System.out.println("These are the numbers you have entered."); printArray(array); } //this method prints the elements in an array...... //if this case is true, then that's enough to prove to you that the user input has //been stored in an array!!!!!!! public static void printArray(int arr[]){ int n = arr.length; for (int i = 0; i < n; i++) { System.out.print(arr[i] + " "); } } } | https://www.daniweb.com/programming/software-development/threads/386178/storing-user-input-into-an-array | CC-MAIN-2017-26 | refinedweb | 181 | 70.19 |
std::thread::thread
From cppreference.com
Constructs new thread object.
1) Creates new thread object which does not represent a thread.
2) Move constructor. Constructs the thread object to represent the thread of execution that was represented by
other. After this call
otherno longer represents a thread of execution.
3) Creates new
std::threadobject and associates it with a thread of execution. First the constructor copies all arguments
args...to thread-local storage as if by the function:
template <class T> typename decay<T>::type decay_copy(T&& v) { return std::forward<T>(v); }
Any exceptions thrown during evaluation and copying of the arguments are thrown in the current thread, not the new thread.
The code that will be run in the new thread is defined as follows. Let's refer to
copied_argsas
t1,
t2, ...,
tN, where
Nis sizeof...(copied_args)and
copied_argsis the result of calling
decay_copyas defined above. The following code will be run in the new thread:
- If
fis pointer to a member function of class
T, then it is called. The return value is ignored. Effectively, the following code is executed:
- (t1.*f)(t2, ..., tN) if the type of
t1is either
T, reference to
Tor reference to type derived from
T.
- ((*t1).*f)(t2, ..., tN) otherwise.
- If N == 1 and
fis pointer to a member data object of a class, then it is accessed. The value of the object is ignored. Effectively, the following code is executed:
- t1.*f if and the type of
t1is either
T, reference to
Tor reference to type derived from
T.
- (*t1).*f otherwise.
fis called as a pointer to a non-member function in all other cases. The return value is ignored. Effectively, f(t1, t2, ..., tN) is executed.
4) The copy constructor is deleted; threads are not copyable. No two
std::threadobjects may represent the same thread of execution.
Parameters
Exceptions
3) std::system_error if the thread could not be started. The exception may represent the error condition
std::errc::resource_unavailable_try_againor another implementation-specific error condition.
Notes
The arguments to the thread function are copied by value. If a reference argument needs to be passed to the thread function, it has to be wrapped (e.g. with std::ref or std::cref).
Any return value from the function is ignored. If the function throws an exception, std::terminate is called. In order to pass return values or exceptions back to the calling thread, std::promise or std::async may be used.
Example
#include <iostream> #include <utility> #include <thread> #include <chrono> #include <functional> #include <atomic> void f1(int n) { for(int i=0; i<5; ++i) { std::cout << "Thread " << n << " executing\n"; std::this_thread::sleep_for(std::chrono::milliseconds(10)); } } void f2(int& n) { for(int i=0; i<5; ++i) { std::cout << "Thread 2 executing\n"; ++n; std::this_thread::sleep_for(std::chrono::milliseconds(10)); } } int main() { int n = 0; std::thread t1; // t1 is not a thread std::thread t2(f1, n+1); // pass by value std::thread t3(f2, std::ref(n)); // pass by reference std::thread t4(std::move(t3)); // t4 is now running f2(). t3 is no longer a thread t2.join(); t4.join(); std::cout << "Final value of n is " << n << '\n'; }
Possible output:
Thread 1 executing Thread 2 executing Thread 1 executing Thread 2 executing Thread 1 executing Thread 2 executing Thread 1 executing Thread 2 executing Thread 1 executing Thread 2 executing Final value of n is 5 | http://en.cppreference.com/mwiki/index.php?title=cpp/thread/thread/thread&oldid=43276 | CC-MAIN-2013-20 | refinedweb | 572 | 58.18 |
Supercharge Your Flex App with ColdFusion PowerBy Kai Koenig
There are quite a few tutorials here at SitePoint that can help you grasp some of the key principles of creating Rich Internet Applications (RIAs) using Flex and AIR. You’ll find that most development undertakings in Flex will involve a back-end application to interact with the Flex client.
Let’s discuss some of the theories and principles behind what makes up a Flex application, and then put those principles into practice with a ColdFusion app. We’ll assume here that you have some experience already with ColdFusion development.
Pay attention, because there’s a quiz at the end. The first 100 people to complete the quiz will win themselves a copy of Getting Started With Flex 3, thanks to our sponsor, Adobe. Take the quiz!
Understanding a Rich Internet Application’s Architecture
From a high level point of view, the common systems architecture of a web application usually comprises three layers.
The bottom tier consists of a data storage layer, usually a relational database system such as Microsoft’s SQL Server, Oracle, or MySQL. Such a layer provides a relational table model that can be used to store and retrieve application data.
The layer above the data storage layer is known as the application server, or middleware. Commonly used technologies in this playground are Adobe’s ColdFusion, Java, PHP, Ruby on Rails, or .NET. Those platforms are used to develop business and data access logic.
On top of that, or perhaps even embedded in the middleware, we’ll find a layer responsible for HTTP delivery – that is, web servers like IIS or Apache. In Rich Internet Applications, architects sometimes have to deal with other protocols besides HTTP: technologies such as Flash Media Server, for example, support the real-time streaming protocol RTMP.
In my earlier tutorial, I showed how a Flex application could communicate with other applications and data on the client side. This time, we’ll now communicate with our business and data layers on the server side.
How Flex Communicates
Flex can access remote data mainly by three different approaches:
- HTTP calls to flat files, XML files or dynamic URLs that deliver data Flex can work with
- SOAP-based web service calls
- Action Message Format (AMF) remote object calls.
Each of these methods is represented by different ActionScript classes and MXML tags. It’s fair to say that the MXML tag syntax is probably easier to use when coming from a ColdFusion background, because you’re comfortable being able to use a similar syntax to ColdFusion’s CFML syntax.
HTTPService: Retrieving Data Over HTTP
Let’s think about the tag
mx:HTTPService. Unsurprisingly, this is an HTTP-based service, which we can use to grab some data from elsewhere at runtime. In MXML, such a service would be declared as below:
<mx:HTTPService
The
id attribute provides a reference to the service object. The
url attribute points to a static XML file that’s accessible via HTTP, but could also point to a local file, like so:
<mx:HTTPService
In this case the Flex application would expect to find the xml file in the same path as itself.
Naturally, it’s possible to have ColdFusion generate some XML for us. Here’s a very simple way of doing this:
<cfsavecontent variable="xmlContent"><data>
<result>
<item>Taxes</item>
<amount>2000</amount>
</result>
...
</data></cfsavecontent>
<cfcontent reset="true" type="text/xml"><cfoutput>#xmlContent#</cfoutput>
The tag stores created static or dynamic content in a variable,
xmlContent. We then use a
cfcontent tag to reset any output we may have created earlier and specify a
text/xml content-type. We then deliver the content of the variable
xmlContent using a simple
cfoutput tag. Voilà , out comes some XML.
On the next page, we’ll look at a Flex application that would consume that XML.
A Flex application to cater for such a service call would look like:
<?xml version="1.0" encoding="utf-8"?>
<mx:Application
xmlns:
<mx:Script>
<![CDATA[
import mx.controls.Alert;
import mx.rpc.events.ResultEvent;
import mx.rpc.events.FaultEvent;
import mx.collections.ArrayCollection;
[Bindable]
private var budgetData:ArrayCollection = new ArrayCollection();
private function cfmFileRH(e:ResultEvent):void
{
budgetData = e.result.data.result as ArrayCollection;
}
private function cfmFileFH(e:FaultEvent):void
{
mx.controls.Alert.show(e.fault.faultString,"Error when loading XML");
}
]]>
</mx:Script>
<mx:HTTPService
<mx:DataGrid
</mx:DataGrid>
</mx:Application>
Here’s what’s happening: we’re using an
mx:HTTPService to grab some data, and storing the result from the service in a variable named
budgetData. The
creationComplete event handler on the tag sends the
HTTPService request when the application is created. Finally, a
mx:DataGrid object displays the data as a table.
Looking closely at the
mx:HTTPService we can see that it includes two attributes:
result and
fault. In Flex, any data service request is dealt with asynchronously, meaning that the Flex application will immediately broadcast a result event instead of waiting for a response. All service tags offer the attributes
result and
fault to deal with either of these outcomes. In those attributes you’d just specify the name of the method you’d like to be called whenever a result (or a fault) comes back from the service call.
Practically speaking, an instance of
mx:HTTPService is limited to
GET and
POST requests, although in principle other requests can be issued – but these are unavailable unless the HTTP requests are being routed via a proxy server, such as Adobe’s BlazeDS or LiveCycle Data Services. The reason for that limitation is the security model of the Flash Player, which only supports direct
GET or
POST calls via HTTP. That’s where another concept enters the game: a cross-domain policy file.
Normally, a Flash-based application using an
mx:HTTPService or a SOAP-based web service may only retrieve data from a service or file stored on the same server as the .swf file. To determine whether the data is being pulled from the same domain, Flash Player compares the domain names used for the Flash file and the remote data source. This means that an application loaded from is unable to access HTTP data from – even though they are the same server. One solution is to use a cross-domain policy file, named crossdomain.xml, placed in the web root of the server that’s meant to provide the data. You can learn more about how to use a cross-domain policy file in Adobe’s documentation.
Next, let’s find out about sending request parameters within Flex.
Sending Request Parameters
Perhaps you need to send parameters to your service. The easiest way to pass in parameters is by adding an
mx:request tag as a nested tag of the HTTP service. Let’s imagine we need to send two parameters,
dataMethod and
userType, to a ColdFusion script. We’d use the
mx:request element like so:
<mx:HTTPService
<mx:request>
<dataMethod>getAll</dataMethod>
<userType>administrator</userType>
</mx:request>
</mx:HTTPService>
Since we’re using the HTTP
POST method, all the request variables we’re creating here will become ColdFusion variables in ColdFusion’s
FORM scope. On the ColdFusion side, we’re therefore going to end up with
FORM.dataMethod as well as
FORM.userType on the ColdFusion template. If you’d chosen HTTP
GET (by specifying
method="GET"), your request data would become URL variables:
URL.userType and
URL.dataMethod in this case.
So far we’ve just looked into returning XML data via HTTP services from ColdFusion templates. Although that’s a very common way to interact with HTTP services and ColdFusion, there are some other, alternative return formats for HTTP services that in some occasions might be more appropriate to use:
object: response data is XML and will be converted into a tree of ActionScript objects (
ArrayCollection,
ObjectProxy)
xml: response data is XML and will be converted into an ActionScript object of type
XMLnode– this is a legacy format which is only here for compatibility; it’s best to avoid using it.
e4x: response data is XML and will be converted into an ActionScript XML object
flashvars: response data is a chain of key/value-pairs:
name1=value1&'name2=value2, and so on
text: response data is text, no conversion happens.
resultFormat="object"is the default, and is quite often the right way to go. If you want to work with XML instead of
ArrayCollectionsand
ObjectProxy,
e4x(ECMAScript for XML) is the preferred result format -
xmlmakes use of a deprecated set of API classes that's part of Flex for compatibility reasons only.
Providing a Service with ColdFusion
So: we've spent a good amount of this article talking about HTTP services and the general structure of a service. Now that we understand how these work, discussing web services and remote objects becomes much easier.
Let's take the following ColdFusion component that offers one simple
echomethod to the outside world, and try to build a Flex client that hooks into that CFC:
<cfcomponent>x
<cffunction
name="echo"
returntype="string"
access="remote"
output="false"
hint="I echo whatever I'm being passed">
<cfargument name="input" type="string" required="true">
<cfreturn "Echoing..." & arguments.input>
</cffunction>
</cfcomponent>
ColdFusion offers two ways of exposing the functionality of that component's method to a caller: as a SOAP-based web service or as a remote object. Let's try each.
SOAP
Starting with the SOAP web service, we've made the echo method a web service method by setting the attribute
access="remote"in the
cffunctiontag. At the same time, this setting will enable us to call the method as a remote object. We'd like to retrieve the data in the WSDL format, which can be accomplished simply by adding
?WSDLto the end of the URL.
To grab that WSDL in Flex, we start off using a
mx:WebService, which is structured in much the same manner as a
mx:HTTPService:
<mx:WebService
With web services you'll quite often find that a service offers multiple methods, and that you'd prefer to specify a result or fault handler method per service. That's possible when using
mx:operation:
<mx:WebService
<mx:operation
</mx:WebService>
Providing parameters to web service calls works in a similar way to providing those to HTTP service calls - via
mx:request- but may also be done by using a syntax familiar to developers coming from languages like Java or C++:
dataWS.echo("Hello")
Action Message Format
Now, let's try this with the Action Message Format (AMF). AMF is a protocol that's been around for quite a while, and AMF3's specification was publicly released by Adobe in December 2007. On the server side you'd need an AMF3-enabled service - luckily, as a ColdFusion developer you're ready to provide this type of service right out of the box.
In Flex we can use the
mx:RemoteObjecttag to communicate with remote objects:
<mx:RemoteObject
<mx:method
</mx:RemoteObject>
Comparing this code with the equivalent web service declaration in Flex, there are a few small differences: we're dealing with a destination named "ColdFusion" now, and
mx:operationhas changed to
mx:method.
When one first sets up a Flex project for ColdFusion in Flex Builder, the setup wizard will ask a bunch of questions regarding the location of the ColdFusion server, its port, URL, and context root. That information provided will then be used at the compile time of the Flex application to provide Flex with the details of your AMF URL. This works smoothly and easily for simple data types such as strings or Booleans, or even built-in ColdFusion types such as arrays and structures.
You may even wish to deal with entire objects from your application's business domain, and transfer them back and forth. There's a design pattern called data transfer object or value object that describes such a process, and it's fairly simple to use this method with ColdFusion components. Your service configuration in Flex will mostly remain the same, but instead of sending or expecting simple types from your remote object call, it will be a domain object: an object that represents a customer, shopping cart, employee, or whatever it is you plan to deal with. To allow Flex to deal with such complex object types, ColdFusion has to know about them as well, and there would have to be an equivalent component in ColdFusion. Here's a CFC that describes a user:
<cfcomponent displayname="User" output="false">
<cfproperty name="user_id" type="numeric" />
<cfproperty name="user_name" type="string" />
<cffunction name="init" access="public" returntype="example.User" output="false">
<cfargument name="user_id" type="numeric" required="false" />
<cfargument name="user_name" type="string" required="false" />
<cfset this['user_id'] = arguments.user_id />
<cfset this['user_name'] = arguments.user_name />
<cfreturn this />
</cffunction>
</cfcomponent>
In the following minimal ActionScript 3 class, we also describe what a User object ought to contain:
[Bindable]
[RemoteClass(alias="example.User")]
public class User
{
public var user_id:int=0;
public var user_name:String="";
}
The metadata tag
[Bindable]allows
Userto be used in Flex data bindings.
[RemoteClass(alias="example.User")]maps the CFC type
example.Useron the server end to the
Userclass here in ActionScript.
Now What?
Where do we go from here? If you're only using ColdFusion, my guess is that you'll find it's easier to just use RemoteObjects for these kinds of projects. If you have a mixed environment comprising multiple technologies, it might be worth looking into web services and HTTP service calls to load XML. Either way, you're now equipped to deal with either situation: time to have fun!
Think you're ready to kick butt with ColdFusion? Why not test your newfound knowledge with our quiz - just five easy questions, and all the answers were right here in this article. The first 100 people to take our quiz will win a copy of Getting Started with Flex delivered straight to your door absolutely free. Grab yours!
No Reader comments | https://www.sitepoint.com/flex-app-coldfusion-power/ | CC-MAIN-2016-44 | refinedweb | 2,339 | 51.58 |
Opened 2 years ago
Closed 2 years ago
#20466 closed Cleanup/optimization (wontfix)
Idiomatic Python: use lists instead of tuples where appropriate
Description
Currently most of the documented lists are actually tuples. This is uncalled for and not really pythonic.
Python authors have already stated that tuple was not a frozen list. Let's stick to using tuples where tuples fit, eg. where a namedtuple could be used instead but would be an overkill.
Change History (7)
comment:1 Changed 2 years ago by patrys
- Needs documentation unset
- Needs tests unset
- Owner changed from nobody to patrys
- Patch needs improvement unset
- Status changed from new to assigned
comment:2 Changed 2 years ago by akaariai
- Triage Stage changed from Unreviewed to Accepted
comment:3 Changed 2 years ago by mjtamlyn
I'm concerned about the backwards compatibility of this patch - I know in a number of my projects I import the default value from global settings and combine it with another list/tuple of my own appropriately. It's a minor issue, but I'm not sure it's worth breaking things for the sake of a little properness. Nothing is broken at the moment.
comment:4 Changed 2 years ago by patrys
@mjtamlyn, can you provide a case that no longer works?
comment:5 Changed 2 years ago by timo
I had similar concerns. For example:
from django.conf.global_settings import TEMPLATE_CONTEXT_PROCESSORS TEMPLATE_CONTEXT_PROCESSORS + ('myapp.myprocessor',)
After this change, the above code would throw an error: TypeError: can only concatenate list (not "tuple") to list.
comment:6 Changed 2 years ago by aaugustin
Like Marc and Tim, I don't think the disruption caused by this change is really worth it.
comment:7 Changed 2 years ago by timo
- Resolution set to wontfix
- Status changed from assigned to closed
Pull request: | https://code.djangoproject.com/ticket/20466 | CC-MAIN-2015-27 | refinedweb | 301 | 60.75 |
On 15/09/2010 08:56, "Dong, Eddie" <eddie.dong@xxxxxxxxx> wrote:
>> that the partial decode from vmexit reason saves you much at all, and
>> you might as well go the whole hog and do full decode. I don't see
>> much saving from a hacky middle-ground.
>
> So how about we reuse some functions in x86 emulate like this one?
Ah, well, now I look at your patch 06/16 properly, I think it's clear and
self-contained as it is. Your private enumerations within nest.c simply
serve to document the format of the decoded instruction provided to you via
fields in the VMCS. I wouldn't be inclined to change it at all, unless Tim
really has strong objections about it. It's not like you're defining
namespaces for new abstractions you have conjured from thin air -- they
correspond directly to a hardware-defined decode format. Defining
enumerations on top of that is *good*, imo. I would take 06/16 as it stands.
-- Keir
> static enum x86_segment
> decode_segment(uint8_t modrm_reg)
> {
> switch ( modrm_reg )
> {
> case 0: return x86_seg_es;
> case 1: return x86_seg_cs;
> case 2: return x86_seg_ss;
> case 3: return x86_seg_ds;
> case 4: return x86_seg_fs;
> case 5: return x86_seg_gs;
> default: break;
> }
> return decode_segment_failed;
> }
>
> Thx, Eddie
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx | http://old-list-archives.xen.org/xen-devel/2010-09/msg00909.html | CC-MAIN-2020-45 | refinedweb | 215 | 63.39 |
Hi!
I am having problems with limiting the Y axis to a certain number of degrees. What I mean is: when I look forward, I want the Y axis to be limited to a total of 40 degrees, so that I am not capable of looking straight up or straight down to my feet and beyond.
I would really appreciate it if someone could give me some help on how to make this possible, thank you.
My script (C#):
using UnityEngine;
using System.Collections;

public class CamMouseLook : MonoBehaviour {

    Vector2 mouseLook;
    Vector2 smoothV;
    public float sensitivity = 10f;
    public float smoothing = 2f;

    GameObject character;

    void Start()
    {
        character = this.transform.parent.gameObject;
    }

    void Update()
    {
        var md = new Vector2(Input.GetAxisRaw("Mouse X"), Input.GetAxisRaw("Mouse Y"));
        md = Vector2.Scale(md, new Vector2(sensitivity * smoothing, sensitivity * smoothing));
        smoothV.x = Mathf.Lerp(smoothV.x, md.x, 1f / smoothing);
        smoothV.y = Mathf.Lerp(smoothV.y, md.y, 1f / smoothing);
        mouseLook += smoothV;

        transform.localRotation = Quaternion.AngleAxis(-mouseLook.y, Vector3.right);
        character.transform.localRotation = Quaternion.AngleAxis(mouseLook.x, character.transform.up);
    }
}
Answer by Zarenityx · Jun 26 at 08:33 PM
With your current setup, this is as simple as using if statements, or the handy built-in Mathf.Clamp function.
To clamp some value X between A and B, you can do this:
float x, a, b; //Assuming a <= b
if (x < a) {
    x = a;
} else if (x > b) {
    x = b;
}
This is actually equivalent to a nice helper function in the Unity library, used like this:
float x, a, b; //Again, a<=b
x = Mathf.Clamp(x,a,b);
Both of these snippets do the same thing. In fact, while I'm not 100% certain, the second method probably just does the first one behind-the-scenes. So in your code, you could, say, define:
public float yMin, yMax;
And then later, in Update():
void Update()
{
    mouseLook += smoothV; //Your code, so you can see where to put it
    mouseLook.y = Mathf.Clamp(mouseLook.y, yMin, yMax); //The actual clamp
    //etc...
}
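As a side note — to get the "total of 40 degrees" you asked for, you'd set yMin/yMax to -20 and 20 (20 degrees up and 20 down from level). Here's a tiny plain-C# sketch of what those limits do; the Clamp here is just a hand-written stand-in for Mathf.Clamp so you can see the behavior without Unity:

```csharp
using System;

class ClampDemo
{
    // Hand-written stand-in for Unity's Mathf.Clamp, for illustration only.
    static float Clamp(float value, float min, float max)
    {
        if (value < min) return min;
        if (value > max) return max;
        return value;
    }

    static void Main()
    {
        // A 40-degree total range = ±20 degrees around looking straight ahead.
        Console.WriteLine(Clamp(35f, -20f, 20f));  // too far up   -> 20
        Console.WriteLine(Clamp(-50f, -20f, 20f)); // too far down -> -20
        Console.WriteLine(Clamp(5f, -20f, 20f));   // in range     -> 5
    }
}
```

Whatever the mouse does, the clamped angle never leaves the [min, max] window — that's all the limiting is.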
Here are a couple of other things I'm noticing in your camera script that are going to cause you problems. This isn't part of the main question but, since you said you were new to coding, you might want to know these since they're pretty common mistakes. (Apologies for poor formatting, lists seem to break code-formatting)
----1: Your mouse sensitivity right now is framerate dependent. You need to include md *= Time.deltaTime to correct this. See This Page for more details.
----2: Your smoothing is a bit wonky. Your smoothing is A: framerate dependent, just like pt 1, and B: applied to the velocity, which can feel a bit weird (your camera won't end up where you expect if you move the mouse around a lot). The fix is to do your interpolation on the target position instead. Consider the following code:
Vector2 md = new Vector2(Input.GetAxisRaw("Mouse X"), Input.GetAxisRaw("Mouse Y"));
md *= sensitivity * Time.deltaTime; //Notice the removal of the smoothing term here and the introduction of the Time.deltaTime term

targetLook = mouseLook + md; //I created this targetLook Vector2 here. This can replace smoothV, I just wanted a more descriptive name
targetLook.y = Mathf.Clamp(targetLook.y, yMin, yMax); //The clamping you wanted. Make sure this is applied to the target, not the actual, rotation

//Now, the interpolation
//Using Vector2.Lerp is easier than 2 Mathf.Lerps
//Note the use of Time.deltaTime
//Also note that this happens to the ROTATION, not VELOCITY
mouseLook = Vector2.Lerp(mouseLook, targetLook, Time.deltaTime / smoothing);

//And then your rotation setting code:
transform.localRotation = Quaternion.AngleAxis(-mouseLook.y, Vector3.right);
character.transform.localRotation = Quaternion.AngleAxis(mouseLook.x, character.transform.up);
Hope this helps, and good luck on your project!
Hey, sorry for my late reaction! Thanks for the extra notes, I really appreciate it! The code you have given me has worked for the most part, and the clamping works perfectly, but there is one big problem... I am not sure what's causing it, but whenever I playtest it, the camera keeps shaking like it's having a stroke the moment I move my mouse! It says: "transform.localRotation assign attempt for 'Character' is not valid. Input rotation is { NaN, NaN, NaN, NaN }. UnityEngine.Transform:set_localRotation(Quaternion)" Do you know what that means? I have no clue 0_0
This is the script i currently have:
using UnityEngine;
using System.Collections;
public class CamMouseLook : MonoBehaviour
{
public float yMin, yMax;
float x, a, b;
Vector2 mouseLook;
Vector2 targetLook;
public float sensitivity = 10f;
public float smoothing = 2f;
GameObject character;
void Start()
{
character = this.transform.parent.gameObject;
}
void Update()
{
if (x < a)
{
x = a;
}
else if (x > b)
{
x = b;
}
mouseLook += targetLook;
targetLook.y = Mathf.Clamp(targetLook.y, yMin, yMax);
Vector2 md = new Vector2(Input.GetAxisRaw("Mouse X"), Input.GetAxisRaw("Mouse Y"));
md *= sensitivity * Time.deltaTime;
targetLook = mouseLook + md;
targetLook.x = Mathf.Lerp(targetLook.x, md.x, 1f / smoothing);
targetLook.y = Mathf.Lerp(targetLook.y, md.y, 1f / smoothing);
mouseLook = Vector2.Lerp(mouseLook, targetLook, Time.deltaTime / smoothing);
transform.localRotation = Quaternion.AngleAxis(-mouselook.y, Vector3.right);
character.transform.localRotation = Quaternion.AngleAxis(mouselook.x, character.transform.up);
}
}
Hey, sorry I got back so late too. I generally write code directly into the submission box so it's entirely possible that the code as is doesn't compile or work quite as intended. Code given is intended as a guideline or visual aid, but not to be copy-pasted. (It's always a bad idea to copy-paste code from the internet anyway.) Regardless, here's a few things I noticed:
Check if your smoothing is 0 or near 0. This can cause issues. If you want to be able to turn off smoothing, then use an if statement to bypass the Vector2.Lerp.
Get rid of the two lines with the Mathf.Lerp in them. Those two lines are superseded by the Vector2.Lerp line anyway, since they were framerate-dependent. The targetLook = mouseLook+md line and the Mathf.Clamp are the ONLY things that should modify targetLook.
targetLook = mouseLook+md
Move the clamping to AFTER the lines that adds md. This should fix some jittering. The issue it that it's being clamped, then edited, meaning it can actually go past the clamp and then later get re-clamped, causing a jarring shaking effect.
You have an if statement included that does nothing. Looks like you copy-pasted my example code of how clamping works. This was written as an example, and actually does nothing useful in your script.
Can't be sure, but it looks like you're copy-pasting. If you are, DON'T. It will make it significantly harder to figure out what's going on down the line, and even if you end up making it work, you'll learn less from it. Instead, look through the code and figure out what it's doing, then write it yourself. This will let you learn how that code functions. Treat internet code snippets as pictures in an instruction manual, not as magic potion73 People are following this question.
Limit camera RotateAround for an UAV game
0
Answers
Flip over an object (smooth transition)
3
Answers
Third person follow camera on a sphere
1
Answer
maximum angle for camera
0
Answers
C#. Rotation Help!
4
Answers | https://answers.unity.com/questions/1643801/coding-noob-struggles-to-implement-basic-thing.html | CC-MAIN-2019-43 | refinedweb | 1,224 | 58.79 |
04 July 2012 12:18 [Source: ICIS news]
LONDON (ICIS)--AkzoNobel is expected to deliver strong earnings growth in the second quarter and for the full year of 2012, largely on pricing strength, global analyst Bernstein Research said on Wednesday.
Bernstein said strong pricing power, particularly in the paints and coatings industry, remains the key earnings driver for the Netherlands-based paints and coatings producer.
“Paints and coatings further increased pricing, (+0.6% month on month, +9% year on year) in May in the ?xml:namespace>
The analyst added that EU prices stabilised in April – flat month on month and up 5% year on year.
“If we assume industry prices were to remain flat sequentially, AkzoNobel's paints and coatings businesses would have already achieved over a 5.5% price increase for 2012,” said Bernstein.
“We forecast a significant (+9%) year on year improvement in group earnings, before interest, tax, depreciation and amortisation (EBITDA),” it added.
However, despite strong pricing power in the paints and coatings industry volumes continue to be weak, especially in
"In contrast, European production volumes in April have nearly stabilized sequentially (-0.4% month on month) but have declined -7% year on year," it added.Bernstein maintains an “outperform” rating for AkzoNobel with a target share price of €53 (€67).
At 10:43 GMT, AkzoNobel’s shares were trading at €38.06 on the Euronext Amsterdam, up 0.29% on the previous close.
On Monday, JP Morgan Cazenove downgraded AkzoNobel to its “underweight” rating, from “neutral”, on weakening sales | http://www.icis.com/Articles/2012/07/04/9575085/akzonobel-expected-to-deliver-strong-growth-in-q2-2012.html | CC-MAIN-2014-49 | refinedweb | 253 | 55.24 |
A pointer is a variable that contains a memory location.
A pointer is a type of variable. It must be declared in the code and it must be initialized before it's used.
The declaration of a pointer has the following format:
type *name;
The type identifies the pointer as a char, int, float, and so on.
The name is the pointer variable's name, which must be unique.
The asterisk identifies the variable as a pointer, not as a regular variable.
The following line declares a char pointer, char_pointer:
char *char_pointer;
And this line creates a double pointer:
double *rainbow;
To initialize a pointer, you must set its value to the memory location.
That location cannot be a random spot in memory, it must be the address of another variable within the program. For example:
char_pointer = &my_char;
The preceding statement initializes the char_pointer variable to the address of the my_char variable.
Both variables are char types. After that statement is executed, the char_pointer pointer contains the address of the my_char variable.
The following code shows that that the pointer char_pointer contains the address, or memory location, of variable my_char.
#include <stdio.h> int main() /*from w ww . j a va 2 s. co m*/ { char my_char; char *char_pointer; my_char = 'A'; /* initialize char variable */ char_pointer = &my_char; /* initialize pointer IMPORTANT! */ printf("About variable 'my_char':\n"); printf("Size\t\t%ld\n",sizeof(my_char)); printf("Contents\t%c\n",my_char); printf("Location\t%p\n",&my_char); printf("And variable 'char_pointer':\n"); printf("Contents\t%p\n",char_pointer); return(0); }
The contents of pointer char_pointer are equal to the memory location of variable my_char. | http://www.java2s.com/example/c-book/get-to-know-pointer.html | CC-MAIN-2019-30 | refinedweb | 271 | 56.15 |
Please note that there is more than one way to answer most of these questions. The following only represents a sample solution.
- Maximillian Brown
- 2 years ago
- Views:
Transcription
1 HOMEWORK THREE SOLUTION CSE 355 DUE: 10 MARCH 2011 Please note that there is more than one way to answer most of these questions. The following only represents a sample solution. (1) 2.4 and 2.5 (b),(c),(e),(f): CFGs and PDAs (b) B = {w w starts and ends with the same symbol} 2.4: We will assume that to start and end with the same symbol, the string must have at least one symbol and that a single symbol starts and ends with the same symbol. S 0A0 1A1 0 1 A 0A 1A ɛ 2.5 Informal Description: We will nondeterministically guess if the string has only one symbol in which case we accept it without using the stack; otherwise, we push the first symbol read onto the stack. Then we will read every other symbol and nondeterministically guess if that is the last symbol read. If the last symbol read then matches the symbol on the stack and there is no more input we accept. Otherwise we reject. 0, 0 1, 1 0,0 1,1 (c) C = {w the length of w is odd} 2.4: S 0A 1A A 0S 1S ɛ 1
2 2.5 Informal Description: The stack is not needed here at all. Therefore we will read the input and only accept if the length is odd, that is after the first symbol read or every other symbol read thereafter if it is the final symbol read. (e) E = {w w = w R, that is, w is a palindrome} 2.4: S 0S0 1S1 0 1 ɛ 2.5 Informal Description: We begin by pushing the symbols read onto the stack. At each point we will nondeterministically guess if the middle of the string has been reached or if the next symbol read is the middle of the string and will not be put on the stack. Then we pop off the symbols from the stack if they match the input symbols read. If the symbols popped are exactly the same symbols that were pushed on earlier and the stack empties as the input is finished, then accept. Otherwise, reject. 0, 0 1, 1 0,0 1,1, $,,$ (f) F =, the emptyset 2.4: S S Note: Since, no derivations terminate, the CFG cannot accept any strings, including the empty. 2.5 Informal Description: The PDA consists of one state that does not accept. 2
3 (2) 2.9 and 2.10: Ambiguous Grammar and PDA for A = {a i b j c k i = j or j = k where i, j, k 0} 2.9: S AB CD A aab ɛ B cb ɛ C ac ɛ D bdc ɛ This grammar is ambiguous. To see this, note that ɛ A and ɛ can be derived with left derivations in two different ways. S AB ɛb ɛ and also S CD ɛd ɛ. There are many other strings that this works for in the grammar above, but it suffices to show one. In fact, it can be shown that A is inherently ambiguous as discussed on page 106 of Sipser, but we will not prove that here At the beginning nondeterministically break into two branches. In the first branch, for every a read, push an a on the stack. Nondeterministically guess when the first b is read and begin popping symbols from the stack for each b read. When the stack is empty, the only input remaining should be cs and then just read the input without adjusting the stack. If the symbols are read in the proper order (as followed by bs followed by cs) and the stack is empty by the time the b are done being read, then accept. Otherwise reject. In the second branch, for every a read do not adjust the stack. Nondeterministically guess when the first b is read and begin pushing symbols on the stack for each b read. Nondeterministically guess when the first c is read, begin popping symbols from the stack for each c read. If the stack empties precisely when the input string is done and the symbols are read in the proper order (as followed by bs followed by cs), accept. Otherwise, reject. Below is a PDA state diagram that recognizes A, but is not required for you to submit on the homework. a, a b, a c,,,$, $, $ a, b, a c,a,,,$ 3
4 (3) 2.16 and 2.17: CFGs and Regular Operations 2.16: For the following let A = (V A, Σ A, R A, S A ) and B = (V B, Σ B, R B, S B ) be CFGs with V A V B = Union: Let C = (V A V B {S}, Σ A Σ B, R A R B {S S A S B }, S). Then L(C) = L(A) L(B), since we, from the new start variable, we go to either the start variable of A, which derives a string in A, or to the start variable of B which derives a string in B. Concatenation: Let D = (V A V B {S}, Σ A Σ B, R A R B {S S A S B }, S). Then L(C) = L(A) L(B), since we from the new start variable we derive something in A followed by something in B. Star:Let E = (V A {S}, Σ A, R A {S S A S ɛ}, S). Then L(C) = (L(A)), since we from the new start variable we derive the concatenation of one or more strings in A, or ɛ. 2.17: We will show that every regular language is context free. Let A be a regular language, then there exists a regular epression R such that L(R) = A. We produce a CFG G such that L(G) = A as follows: G = grammar(r), where grammar := proc(r) If R = then return ({S},, {S S}, S), the grammar in 2.4 (f) Else if R = ɛ then return ({S},, {S ɛ}, S) Else if R = a then return ({S}, {a}, {S a}, S) Else if R = (R 1 R 2 ) then return the CFG for the union of grammar(r 1 ) and grammar(r 2 ) as specified in 2.16 Else if R = R 1 R 2 then return the CFG for the concatenation of grammar(r 1 ) and grammar(r 2 ) as specified in 2.16 Else if R = (R 1 ) then return the CFG for the star of grammar(r 1 ) as specified in 2.16 End Then L(G) = A since at each step of the procedure a grammar is produced that accepts the same language as R. The procedure is guaranteed to terminate since each regular expression R is finite and on each recursive call a smaller regular expression is passed to the procedure (since in the final three cases R 1 and R 2 are proper subcomponents of R). 4
5 (4) 2.35: Grammars in CNF whose languages have infinite length Let G = (V, Σ, R, S) be a CFG in Chomsky normal form that contains b variables and assume that G generates some string with a derivation having at least 2 b steps. We want to show that L(G) is infinite, or that there must exist a path along its parse tree that has a variable repeated. If there is a path that contains b + 1 variables, then by the pigeon-hole principle, one of the variables must be repeated. Thus, we want to show that if G generates some string with a derivation having at least 2 b steps then it must have a parse tree with a height of b + 1. For a given height, h, of a parse tree for a grammar in Chomsky normal form, that parse tree could have at most 2 h 1 steps in a derivation of a string. This is shown by induction on the height of the tree. The base case is h = 1 in which case the derivation is S a for some a Σ {ɛ}. Thus, there is 1 step to the derivation and 1 = Assume that a parse tree of height h has at most 2 h 1 steps to any derivation of a string. Then for a tree of height h + 1 it could have at most 2 h steps on the last level, corresponding to if all the elements on the previous level were variables. Then the maximum number of steps in the derivation are the maximum number of steps in the derivation on the last level plus the maximum number of steps for a derivation of the previous level which is given by the induction hypothesis. Thus, we get 2 h + 2 h 1 = 2(2 h ) 1 = 2 h+1 1, which completes the induction. Since G has a string with a derivation of 2 b steps, the parse tree for that string must have a height of at least b + 1 (since with height b it could have a derivation of at most 2 b 1 steps as shown above). Since the height of the tree is at least b + 1, then the longest path on that tree must have at least b + 2 nodes with only the last node a terminal. Thus, there must be at least b + 1 variables on that path. 
Let A be a variable that repeats on that longest path, then the derivation of that string looks like S uaz uvay z uvxyz where u, v, x, y, z Σ and y, z V. By applying the derivation A vay i times on that path for A followed by the derivation A x, we generate valid parse trees for the strings uv i xy i z for i 0. Thus, for each i 0, uv i xy i z L(G). Whence, there are an infinite number of strings in L(G). 5
6 (5) 1.54: F = {a i b j c k i, j, k 0 and if i = 1 then j = k}, a nonregular language that works in the pumping lemma a. Show F is not regular To see a solution that uses the reverse to show that F is not regular, see the solution given on the first midterm for 1c. We ll give another solution here. For a contradiction assume F is regular, the F {ab i c j i, j 0} = {ab i c i i 0} = G is regular since {ab i c j i, j 0} is the language of the regular expression ab c, it is regular, and regular languages are closed under intersection. We will now use the pumping lemma to show that G cannot be regular, which will contradict the fact the we assumemed F is regular. Let p be the pumping length of G and take s = ab p c p G with s > p. Then there exists x, y, z such that s = xyz and (1) xy i z G for all i 0, (2) y > 0 and (3) xy p. We will show that for any valid choice of xyz that xy 2 z / G. Case 1: y = a. Then xy 2 z = a 2 b p c p / G. Case 2: y = ab k for some k p 1 from (3). Then, xy 2 z = ab k ab p c p / G. Case 3: y = b k for some k p 1 from (2) and (3). Then xy 2 z = ab p+k c p / G. Thus, for every possibility of x, y, z, xy 2 z / G. Therefore, G cannot be regular, a contradiction. Thus, we conclude that F cannot be regular. b. Show that F acts like a regular language in the pumping lemma. First I must give a p. Take p = 2 although any p higher will work (but p = 1 will not, aabc 2 F but pumping down abc 2 / F ). Now we must show for any string s F, with s 2, can be written as xyz such that (1) xy i z G for all i 0, (2) y > 0 and (3) xy p. Again, we will proceed by cases on the number of as in s. Case 1: s = b j c k F for some i, k such that j + k 2 (so s 2). Then take x = ɛ, y equal to the first symbol in s, and z equal to the remaining symbols in s. Then (2) and (3) hold and xy i z = b j+(i 1) c k F if j 0, or xy i z = c k+(i 1) F if j = 0, so (1) also holds. Case 2: s = ab j c j F for some j 1 (so s 2). Then take x = ɛ, y = a, and z = b j c j. 
Then (2) and (3) hold and xy i z = a 1+(i 1) b j c j F, so (1) also holds. Case 3: s = a 2 b j c k F for some j, k 0 (so s 2). Then take x = ɛ, y = a 2 (a won t work here, because of pumping down), and z = b j c k. Then (2) and (3) hold and xy i z = a 2+2(i 1) b j c k F, so (1) also holds. Case 4: s = a j b k c m F for some j 3, k, m 0 (so s 2). Then take x = ɛ, y = a, and z = a j 1 b k c m. Then (2) and (3) hold and xy i z = a j+(i 1) b k c m F, so (1) also holds. Thus, in every case we see that the conditions of the pumping lemma hold. Therefore F satisfies the pumping lemma. c. Explain why parts (a) and (b) do not contradict the pumping lemma. The pumping lemma states that If a language is regular then in can be pumped. However, the converse of the statement, If a language can be pumped then it is regular, need not be true as shown in parts (a) and (b). This is because the converse of an implication is not equivalent to the implication (as can be shown with truth tables). Thus, the pumping lemma is not contradicted. 6)
C H A P T E R Regular Expressions regular expression
7 CHAPTER Regular Expressions Most programmers and other power-users of computer systems have used tools that match text patterns. You may have used a Web search engine with a pattern like travel cancun
Automata and Formal Languages
Automata and Formal Languages Winter 2009-2010 Yacov Hel-Or 1 What this course is all about This course is about mathematical models of computation We ll study different machine models (finite automata,
INSTITUTE OF AERONAUTICAL ENGINEERING
INSTITUTE OF AERONAUTICAL ENGINEERING DUNDIGAL 500 043, HYDERABAD COMPUTER SCIENCE AND ENGINEERING TUTORIAL QUESTION BANK Name : FORMAL LANGUAGES AND AUTOMATA THEORY Code : A40509 Class : II B. Tech II
Computing Functions with Turing Machines
CS 30 - Lecture 20 Combining Turing Machines and Turing s Thesis Fall 2008 Review Languages and Grammars Alphabets, strings, languages Regular Languages Deterministic Finite and Nondeterministic Automata
Honors Class (Foundations of) Informatics. Tom Verhoeff. Department of Mathematics & Computer Science Software Engineering & Technology
Honors Class (Foundations of) Informatics Tom Verhoeff Department of Mathematics & Computer Science Software Engineering & Technology c 2011, T. Verhoeff @ TUE.NL 1/20 Information
ÖVNINGSUPPGIFTER I SAMMANHANGSFRIA SPRÅK. 15 april 2003. Master Edition
ÖVNINGSUPPGIFTER I SAMMANHANGSFRIA SPRÅK 5 april 23 Master Edition CONTEXT FREE LANGUAGES & PUSH-DOWN AUTOMATA CONTEXT-FREE GRAMMARS, CFG Problems Sudkamp Problem. (3.2.) Which language generates the grammar.
Model 2.4 Faculty member + student
Model 2.4 Faculty member + student Course syllabus for Formal languages and Automata Theory. Faculty member information: Name of faculty member responsible for the course Office Hours Office Number Email
Regular Expressions with Nested Levels of Back Referencing Form a Hierarchy
Regular Expressions with Nested Levels of Back Referencing Form a Hierarchy Kim S. Larsen Odense University Abstract For many years, regular expressions with back referencing have been used in a variety
Lecture 2: Regular Languages [Fa 14]
Caveat lector: This is the first edition of this lecture note. Please send bug reports and suggestions to jeffe@illinois.edu. But the Lord came down to see the city and the tower the people were building.
SRM UNIVERSITY FACULTY OF ENGINEERING & TECHNOLOGY SCHOOL OF COMPUTING DEPARTMENT OF SOFTWARE ENGINEERING COURSE PLAN
Course Code : CS0355 SRM UNIVERSITY FACULTY OF ENGINEERING & TECHNOLOGY SCHOOL OF COMPUTING DEPARTMENT OF SOFTWARE ENGINEERING COURSE PLAN Course Title : THEORY OF COMPUTATION Semester : VI Course : June Automata Theory. Reading: Chapter 1
Introduction to Automata Theory Reading: Chapter 1 1 What is Automata Theory? Study of abstract computing devices, or machines Automaton = an abstract computing device Note: A device need not even be a
CS5236 Advanced Automata Theory
CS5236 Advanced Automata Theory Frank Stephan Semester I, Academic Year 2012-2013 Advanced Automata Theory is a lecture which will first review the basics of formal languages and automata theory and then
Chapter 7 Uncomputability
Chapter 7 Uncomputability 190 7.1 Introduction Undecidability of concrete problems. First undecidable problem obtained by diagonalisation. Other undecidable problems obtained by means of the reduction
Lecture 5: Context Free Grammars
Lecture 5: Context Free Grammars Introduction to Natural Language Processing CS 585 Fall 2007 Andrew McCallum Also includes material from Chris Manning. Today s Main Points In-class hands-on exercise A:
Bottom-Up Parsing. An Introductory Example
Bottom-Up Parsing Bottom-up parsing is more general than top-down parsing Just as efficient Builds on ideas in top-down parsing Bottom-up is the preferred method in practice Reading: Section 4.5 An Introductory
INCIDENCE-BETWEENNESS GEOMETRY
INCIDENCE-BETWEENNESS GEOMETRY MATH 410, CSUSM. SPRING 2008. PROFESSOR AITKEN This document covers the geometry that can be developed with just the axioms related to incidence and betweenness. The.
10CS35: Data Structures Using C
CS35: Data Structures Using C QUESTION BANK REVIEW OF STRUCTURES AND POINTERS, INTRODUCTION TO SPECIAL FEATURES OF C OBJECTIVE: Learn : Usage of structures, unions - a conventional tool for handling a
Reading 13 : Finite State Automata and Regular Expressions
CS/Math 24: Introduction to Discrete Mathematics Fall 25 Reading 3 : Finite State Automata and Regular Expressions Instructors: Beck Hasti, Gautam Prakriya In this reading we study a mathematical model
OHJ-2306 Fall 2011 Introduction to Theoretical Computer Science
1 OHJ-2306 Fall 2011 Introduction to Theoretical Computer Science 2 3 Organization & timetable Lectures: prof. Tapio Elomaa, M.Sc. Timo Aho Tue and Thu 14 16 TB220 Aug. 30 Dec. 8, 2011 Exceptions: Thu
ASSIGNMENT ONE SOLUTIONS MATH 4805 / COMP 4805 / MATH 5605
ASSIGNMENT ONE SOLUTIONS MATH 4805 / COMP 4805 / MATH 5605 (1) (a) (0 + 1) 010 (finite automata below). (b) First observe that the following regular expression generates the binary strings with an even
Graph Theory Problems and Solutions
raph Theory Problems and Solutions Tom Davis tomrdavis@earthlink.net November, 005 Problems. Prove that the sum of the degrees of the vertices of any finite graph is
Turing Machines: An Introduction
CIT 596 Theory of Computation 1 We have seen several abstract models of computing devices: Deterministic Finite Automata, Nondeterministic Finite Automata, Nondeterministic Finite Automata with ɛ-transitions,
Formal language and Automata Theory Course File
Formal language and Automata Theory Course File D.HIMAGIRI, Asst.Professor, CSE Department, JBIET. 2013-14 COURSE PLAN FACULTY DETAILS: Name of the Faculty:: Designation: Department:: D. HIMAGIRI Asst.Professor
CS143 Handout 08 Summer 2008 July 02, 2007 Bottom-Up Parsing
CS143 Handout 08 Summer 2008 July 02, 2007 Bottom-Up Parsing Handout written by Maggie Johnson and revised by Julie Zelenski. Bottom-up parsing As the name suggests, bottom-up parsing works in the opposite
Philadelphia University Faculty of Information Technology Department of Computer Science First Semester, 2007/2008.
Philadelphia University Faculty of Information Technology Department of Computer Science First Semester, 2007/2008 Course Syllabus Course Title: Theory of Computation Course Level: 3 Lecture Time: Course
Increasing Interaction and Support in the Formal Languages and Automata Theory Course
Increasing Interaction and Support in the Formal Languages and Automata Theory Course Susan H. Rodger Duke University ITiCSE 2007 June 25, 2007 Supported by NSF Grant DUE 0442513. Outline Overview of JFL ].
Theory of Computation
Theory of Computation For Computer Science & Information Technology By Syllabus Syllabus for Theory of Computation Regular Expressions and Finite Automata, Context-Free Grammar s
CS 103X: Discrete Structures Homework Assignment 3 Solutions
CS 103X: Discrete Structures Homework Assignment 3 s Exercise 1 (20 points). On well-ordering and induction: (a) Prove the induction principle from the well-ordering principle. (b) Prove the well-ordering
COMP 356 Programming Language Structures Notes for Chapter 4 of Concepts of Programming Languages Scanning and Parsing
COMP 356 Programming Language Structures Notes for Chapter 4 of Concepts of Programming Languages Scanning and Parsing The scanner (or lexical analyzer) of a compiler processes the source program, recognizing
Push-down Automata and Context-free Grammars
14 Push-down Automata and Context-free Grammars This chapter details the design of push-down automata (PDA) for various languages, the conversion of CFGs to PDAs, and vice versa. In particular, after formally
Regular Expressions and Automata using Haskell
Regular Expressions and Automata using Haskell Simon Thompson Computing Laboratory University of Kent at Canterbury January 2000 Contents 1 Introduction 2 2 Regular Expressions 2 3 Matching regular expressions
Computer Architecture Syllabus of Qualifying Examination
Computer Architecture Syllabus of Qualifying Examination PhD in Engineering with a focus in Computer Science Reference course: CS 5200 Computer Architecture, College of EAS, UCCS Created by Prof. Xiaobo
Finite Automata. Reading: Chapter 2
Finite Automata Reading: Chapter 2 1 Finite Automaton (FA) Informally, a state diagram that comprehensively captures all possible states and transitions that a machine can take while responding to a stream
We now explore a third method of proof: proof by contradiction.
CHAPTER 6 Proof by Contradiction We now explore a third method of proof: proof by contradiction. This method is not limited to proving just conditional statements it can be used to prove any kind of statement
Mathematical Induction
Mathematical Induction (Handout March 8, 01) The Principle of Mathematical Induction provides a means to prove infinitely many statements all at once The principle is logical rather than strictly mathematical,
Theory of Computation Class Notes 1
Theory of Computation Class Notes 1 1 based on the books by Sudkamp and by Hopcroft, Motwani and Ullman ii Contents 1 Introduction 1 1.1 Sets.............................................. 1 1.2 Functions
Factorizations: Searching for Factor Strings
" 1 Factorizations: Searching for Factor Strings Some numbers can be written as the product of several different pairs of factors. For example, can be written as 1, 0,, 0, and. It is also possible to write
Lesson 6: Proofs of Laws of Exponents
NYS COMMON CORE MATHEMATICS CURRICULUM Lesson 6 8 Student Outcomes Students extend the previous laws of exponents to include all integer exponents. Students base symbolic proofs on concrete examples to,
Mathematical Induction. Lecture 10-11
Mathematical Induction Lecture 10-11 Menu Mathematical Induction Strong Induction Recursive Definitions Structural Induction Climbing an Infinite Ladder Suppose we have an infinite ladder: 1. We can reach
Overview of E0222: Automata and Computability
Overview of E0222: Automata and Computability Deepak D Souza Department of Computer Science and Automation Indian Institute of Science, Bangalore. August 3, 2011 What this course is about What we study
1 Introduction to Counting
1 Introduction to Counting 1.1 Introduction In this chapter you will learn the fundamentals of enumerative combinatorics, the branch of mathematics concerned with counting. While enumeration problems
Computer Science Theory. From the course description:
Computer Science Theory Goals of Course From the course description: Introduction to the theory of computation covering regular, context-free and computable (recursive) languages with finite state machines,
PES Institute of Technology-BSC QUESTION BANK
PES Institute of Technology-BSC Faculty: Mrs. R.Bharathi CS35: Data Structures Using C QUESTION BANK UNIT I -BASIC CONCEPTS 1. What is an ADT? Briefly explain the categories that classify the functions
Deterministic Finite Automata
1 Deterministic Finite Automata Definition: A deterministic finite automaton (DFA) consists of 1. a finite set of states (often denoted Q) 2. a finite set Σ of symbols (alphabet) 3. a transition function
Automata, languages, and grammars
Automata, languages, and grammars Cristopher Moore January 24, 2015 Abstract These lecture notes are intended as a supplement to Moore and Mertens The Nature of Computation, and are available to anyone
Compiler Construction
Compiler Construction Regular expressions Scanning Görel Hedin Reviderad 2013 01 23.a 2013 Compiler Construction 2013 F02-1 Compiler overview source code lexical analysis tokens intermediate code generation10040 Chapter 2: Prime and relatively prime numbers
MATH10040 Chapter 2: Prime and relatively prime numbers Recall the basic definition: 1. Prime numbers Definition 1.1. Recall that a positive integer is said to be prime if it has precisely two positive
Automata and Rational Numbers
Automata and Rational Numbers Jeffrey Shallit School of Computer Science University of Waterloo Waterloo, Ontario N2L 3G1 Canada shallit@cs.uwaterloo.ca 1/40 Representations
Math Workshop October 2010 Fractions and Repeating Decimals
Math Workshop October 2010 Fractions and Repeating Decimals This evening we will investigate the patterns that arise when converting fractions to decimals. As an example of what we will be looking at,
Mathematical induction. Niloufar Shafiei
Mathematical induction Niloufar Shafiei Mathematical induction Mathematical induction is an extremely important proof technique. Mathematical induction can be used to prove results about complexity of
Basic Proof Techniques
Basic Proof Techniques David Ferry dsf43@truman.edu September 13, 010 1 Four Fundamental Proof Techniques When one wishes to prove the statement P Q there are four fundamental approaches. This document
Discrete Mathematics Problems
Discrete Mathematics Problems William F. Klostermeyer School of Computing University of North Florida Jacksonville, FL 32224 E-mail: wkloster@unf.edu Contents 0 Preface 3 1 Logic 5 1.1 Basics...............................
CmSc 175 Discrete Mathematics Lesson 10: SETS A B, A B
CmSc 175 Discrete Mathematics Lesson 10: SETS Sets: finite, infinite, : empty set, U : universal set Describing a set: Enumeration = {a, b, c} Predicates = {x P(x)} Recursive definition, e.g. sequences
Clock Arithmetic and Modular Systems Clock Arithmetic The introduction to Chapter 4 described a mathematical system
CHAPTER Number Theory FIGURE FIGURE FIGURE Plus hours Plus hours Plus hours + = + = + = FIGURE. Clock Arithmetic and Modular Systems Clock Arithmetic The introduction to Chapter described a mathematical | http://docplayer.net/27502519-Please-note-that-there-is-more-than-one-way-to-answer-most-of-these-questions-the-following-only-represents-a-sample-solution.html | CC-MAIN-2019-04 | refinedweb | 4,434 | 57.71 |
Revision history for Perl extension Device::Gsm. :)

1.58  Mon Mar 7 22:31:22 EST 2011
  - Fixed RT #48229, an uninitialized value when registering to the
    network but getting no answer from the phone.

1.57  Mon Mar 7 20:53:03 EST 2011
  - Fixed a bug in send_sms() that prevented it from working at all.
    The bug was introduced with the "assume_registered" option.
  - Fixed RT #57585. Thanks to Eric Kössldorfer for his patch and
    test case.
  - Added PDU<->latin1 conversion functions in Device::Gsm::Pdu
  - Note to self: first release from Australia!

1.56  Mon Nov 15 21:00:00 CET 2010
  - When sending messages in text mode, now we wait a bit between the
    +CMSG command and the actual text. Fixes RT #61729. Thanks to
    Boris Ivanov for the report.
  - Added a clear example of logging to a custom file
  - Added a warning for the unimplemented _read_messages_text()
  - Added an "assume_registered" option to skip GSM network
    registration on buggy/problematic devices.

1.55  Sun Jun 27 18:07:11 CEST 2010
  - Fixed RT #58869, incorrect decoding of text7 messages. Thanks to
    Alexander Onokhov.

1.54  Sun Sep 6 10:44:53 CEST 2009
  - Fixed RT #31565, incorrect decoding of outgoing messages due to
    incorrect removal of a zero-length octet in the PDU. Thanks to
    Svami Dhyan Nataraj.

1.53  Fri Aug 14 21:43:37 CEST 2009
  - Fixed RT #48700, deleting the SMS message with index 0 didn't
    work. Thanks to Vytas M. for reporting the bug.

1.52  Tue Nov 25 21:24:00 CET 2008
  - Added a longer timeout on call forwarding. Thanks to Marek Jaros.

1.51  Wed Oct 29 22:32:00 CET 2008
  - Added a new method to set/reset call forwarding options (patch
    contributed by Attila Nagy: thanks!)
  - Removed the syslog test script (t/04syslog.t). It continuously
    broke tests on Solaris and basically every other setup where you
    are not installing as a privileged user.

1.50  Tue Sep 30 22:50:00 CEST 2008
  - Sometimes send_sms() could report success although no SMS had
    really been sent and we received garbage. First attempt at a fix
    for this. Reported by Marek Jaros.
1.49 Sat Aug 9 15:12:00 CEST 2008 - Modified manufacturer() to work also with some Motorola phones, that report AT+CGMI in a slightly different way. - Changed my phone number :) 1.48 Wed Feb 28 21:55:23 CET 2007 - Fixed CPAN bug #24781, thanks to torsten at archesoft dot de for reporting. 1.47 Tue Feb 13 11:31:24 CET 2007 - Fixed t/30networks.t test script. 1.46 Tue Dec 19 22:05:31 CET 2006 - Fixed CPAN RT wishlist #23575. Again thanks to Troels for his ideas. 1.45 Tue Oct 17 18:01:43 CEST 2006 - Fixed CPAN RT bugs #21991, #21992, #22025. Thanks to Troels Jensen for accurate reporting and patch suggestions. 1.44 Wed Aug 16 08:25:50 CEST 2006 - Fixed decoding of alphabetical sender addresses. - Added correct decoding of UCS languages (Russian for example). Thanks to Nikolay Shaplov for the bug report + patch. 1.43 Sun Jul 23 17:40:15 CEST 2006 - Added ability to decode alphabetical sender addresses. Thanks to Torsten Mueller for report the issue. 1.42 Mon May 8 21:16:40 CEST 2006 - Fixed a bug in Device::Gsm::Sms delete() method that invoked a non-existent Device::Gsm::deleteMessage(). - Implemented read and delete of sms messages from different storages (ME, SM, ...). Only for PDU format. Thanks to Jurij Sikorsky for the patch. 1.41 Thu Apr 20 21:52:19 CEST 2006 - === Now Device::Gsm 1.39+ requires at least Device::Modem v1.47 === Check out on CPAN before upgrading. - Some cool fixes for better support of Iridium Satellite phones, GPRS connections and command/response faster cycle. Many thanks to Ed Wildgoose. 1.37 Sat Aug 27 12:53:00 CEST 2005 - fixed regexp to extract sim card sms messages information (thanks to Pierre Hilson <zorglups hotmail com>) 1.36 1.35 Wed Sep 15 23:43:03 CEST 2004 - finally added encoding/decoding of sms text between 8 bit ASCII (ISO-8859) and 7 bit GSM 03.38 (thanks to Stefano Pisani for his feedback) 1.34 Wed Aug 18 09:10:48 CEST 2004 - fixed delete_sms() command syntax and results parsing. Thanks to all users that reported problems. 
- added an example script on how to delete sms messages. 1.33 Wed May 26 13:52:43 CEST 2004 - added delete_sms() method. For now, works only in PDU mode. - fixed messages() newline regex split pattern - modified send sms in text mode to cope with slower/older devices - modified message read method to set pdu mode before executing 1.28 || 1.25 Fri Jan 23 00:59:16 CET 2004 - further documentation, troubleshooting and autoscan utility sections - added all new pdu sms token-oriented decoding classes (this is an 80% complete implementation, mostly regarding charset conversion issues) - Device::Gsm::Sms public class is now documented properly 1.24 || 1.20 Wed Jan 21 23:32:45 CET 2004 - completed full documentation (to be revised) - documented working irda/bluetooth connections (always through device::serialport :-) 1.19 1.18 Mon Mar 24 23:32:27 CET 2003 - added a complete interface to read messages on SIM card ( $gsm->messages() method and Device::Gsm::Sms objects ) - some minor cleanup 1.17 Thu Sep 12 00:22:06 CEST 2002 - added signal_quality() method to measure gsm signal power (dBm) - added an example script in [examples] folder (`report_signal.pl') 1.15 1.16 Tue Sep 3 23:37:56 CEST 2002 - fixed documentation example (thanks to Domenico Viggiani) - added `examples/send_to_cosimo.pl' script to report successful installations of Device::Gsm module :-) - improved test scripts; now work also with env vars, and without too many `connect()'s 1.14 Mon Apr 29 18:54:36 CEST 2002 - fixed errors in debugging - fixed errors in AT command response handling (still to-do) - included network registration test (must supply PIN in t/30register.t) - added send_sms.pl in examples/ 1.10 Wed Apr 10 00:20:28 CEST 2002 - fixed and renamed Pdu.pm functions - send_sms() new parameters that dispatch calls to _send_sms_pdu() and _send_sms_text() to allow text and PDU modes - send_sms() now works! 
(with siemens C25/35/45) 1.09 Tue Apr 9 00:19:02 CEST 2002 - Pdu.pm added with functions to treat PDU encoded data - added _send_sms_pdu() internal method 1.08 Sat Apr 6 00:24:29 CEST 2002 - hangup() method - imei() / serial_number() method with test case 1.07 Wed Apr 3 23:38:51 CEST 2002 - removed unnecessary javadoc style tags - made use of new parse_answer() call - now service_center() works - device identification functions now work - improved test suite - fixed logging class mechanism 1.06 Sat Mar 30 16:43:55 CET 2002 - added manufacturer(), model(), software_version() methods to get information about the device - added test_command() to test support for a specific GSM command 1.05 Sat Mar 30 15:12:09 CET 2002 - added service_number() method 1.04 Mon Mar 25 07:34:52 CET 2002 - added pod, html, text docs 0.01 Tue Feb 26 22:15:31 CET 2002 - original version; created by h2xs 1.20 with options -A -X -n Device::Gsm -v 0.01 | https://metacpan.org/changes/distribution/Device-Gsm | CC-MAIN-2015-18 | refinedweb | 1,209 | 75 |
CodePlex: Project Hosting for Open Source Software
Hey,
In a new Orchard site (yup, I'm a noob, it's my first one) I have a custom "product" content type.
On a page with a list of these types, I have already changed the summary of the items, and it looks nice. But now I want a sub-navigation listing all the products in the list (the sub-navigation should be positioned in the same zone as the list itself, not
in the menu or so).
What is the best way to achieve that? I thought about overriding the rendering of the list itself and adding a foreach loop over the list items, but I can't seem to find out how to override the list itself.
Thanks in advance!
Did some further research on this topic and I found this:
But I cannot seem to get it to work. For example, I tried the following as a test, but it does not render my own template: "Parts.Products.Product.List.cshtml" (products is the list to which the different "product" content items are added)
Would really appreciate some help here :s
Did you try to use shape tracing in order to determine whether that is really the name you should be using for your template?
Thanks for your response.
I tried, yes.
But I did not find a suitable name for overriding lists. I only found out about the alternate in this screenshot: (I tried to use that one, but it changes the admin lists as well...)
What am I missing?
The driver for your products part will have to either produce a different shape than List, or set a display type that you can use to differentiate the template.
hmm okay I understand, but the Products part isn't generated in code by me. Both the "product" and the list "products" are built in the admin with some parts and fields.
Or is it possible to somehow override the driver's code responsible for the shape generation?
In that case you could build a handler that creates an alternate for the shape, but for that to work, you will need to find a way to differentiate a list of products from a regular list. Can you do that?
I guess I can use the "ContentType" in one way or another in the handler. If that's what you mean, I can give it a try :)
Thanks for the info so far!
One extra question: if I'm not mistaken, I can put the handler right in my theme, yes? Just include it in the project file and the dependency injection system will take care of the rest, right?
Yes, that's correct. The handler will then be active as long as the theme itself is.
Hmm, from looking around I think I must override the BuildDisplayShape method in the contenthandler? Yes?
But my ContentHandler never gets activated, so it seems.
The breakpoints are never hit... Am I forgetting something?
PS: this is the handler's code so far (not much ;) ):

public class ProductContentHandler : ContentHandler
{
    public ProductContentHandler()
    {
    } // <breakpoint here>

    protected override void BuildDisplayShape(BuildDisplayContext context)
    {
        base.BuildDisplayShape(context);
    } // <and breakpoint here>
}
And... the csproj does point to your handler file, right?
Aha no it doesn't. Shame on me :) Got the handler working, thanks for your help.
Besides this question: I was able to solve the original problem by creating a widget for the sub-navigation... I can then include the widget on the page(s) that need it. Perhaps not the most elegant solution, but it works nonetheless :)
Last updated on 5/27/20
Test Your Knowledge on Linearity, Correlation, and Hypothesis Testing
Evaluated skills
- Understand the Fundamentals of Statistical Modeling
Description
In this exercise, you are going to analyze a new dataset in terms of linearity, correlation, and the statistical significance of the means of certain categories and whether some variables follow a normal distribution.
The dataset is the bike sharing dataset available from the UCI repository. This dataset has daily and hourly data. We are going to work on the day.csv file, which has 731 samples and 16 different variables.
And we are going to focus on the following five attributes:
- Season: Season (1:spring, 2:summer, 3:fall, 4:winter)
- Temp: Normalized temperature in Celsius.
- Hum: Normalized humidity.
- Wind speed: Normalized wind speed.
- CNT: Count of total rental bikes.
You can load the dataset with:
import pandas as pd
df = pd.read_csv('day.csv')
And remove the non essential columns with:
df = df[['season', 'temp','hum','windspeed','cnt']]
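To make the correlation questions below concrete, here is a hedged sketch of how the Pearson correlation matrix can be computed with pandas. The tiny hand-made DataFrame is only a stand-in for day.csv (not bundled here), so its rows and the resulting numbers are illustrative assumptions, not answers to the quiz.

```python
import pandas as pd

# Stand-in for day.csv: a few made-up rows with the same five columns.
df = pd.DataFrame({
    'season':    [1, 1, 2, 3, 4, 4],
    'temp':      [0.20, 0.25, 0.55, 0.70, 0.30, 0.22],
    'hum':       [0.80, 0.75, 0.60, 0.55, 0.70, 0.85],
    'windspeed': [0.30, 0.25, 0.15, 0.10, 0.20, 0.35],
    'cnt':       [1200, 1500, 4200, 5600, 2100, 1300],
})

# Pearson correlation of every pair of columns: values lie in [-1, 1],
# and each column correlates perfectly (1.0) with itself.
corr = df.corr(method='pearson')

# One column of the matrix answers "what moves with the rental count?"
print(corr['cnt'].sort_values())
```

The same two lines (`df.corr(method='pearson')` and the sorted `cnt` column) apply unchanged once the real day.csv is loaded.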
Question 1
Draw the scatter plots of the variables (use sns.pairplot(df) from the Seaborn library).
Looking at the scatter plots of the variables, which relation is the most linear looking?
hum vs. temp
cnt vs. temp
wind speed vs. hum
All of the above.
Question 2
Calculate the correlation of the different variables using the Pearson method.
Which pair of variables is negatively correlated with the number of users (cnt)?
season & temp
temp & hum
wind speed & hum
temp & wind speed
Question 3
Consider the correlation of the wind speed versus the other variables.
What can you conclude when there's more wind? Careful, there are several correct answers.
A slight decrease in the number of people biking.
A very important decrease in the number of people biking.
Colder temperatures and less humidity.
A slight increase in the number of users. | https://openclassrooms.com/en/courses/5873596-design-effective-statistical-models-to-understand-your-data/exercises/3456 | CC-MAIN-2021-21 | refinedweb | 322 | 59.9 |
Add entire python directory to SANDBOX_PREDICT, bug 554252.
Forbid also installing "examples" package, bug #555038.
Replace links to python-r1 dev guide with links to the wiki.
Re-apply python-exec:0 removal, now with typos fixed.
Revert random mgorny madness
Remove support for python-exec:0.
Fix for setuptools failures #534058 etc.
Support restricting implementations for *_all() phases.
Verbosely deprecate python_parallel_foreach_impl and DISTUTILS_NO_PARALLEL_BUILD.
Add die-replacements for distutils.eclass functions, to help finding mistakes in conversions.
Always restore initial directory after sub-phase run. Fixes bug #532168 and possibly more.
Restore using separate HOMEs for Python implementations, because of .pydistutils.cfg. Bug #532236.
Disable parallel run support to make things easier for developers and more predictable for users.
eqawarn about /usr/lib/pypy/share instead of dying.
Pass install paths to distutils via setup.cfg.
Support linking Python modules on aix, thanks to haubi.
Attempt to use a UTF-8 locale if one is available to avoid errors when setup.py calls open() with no encoding.
Set LD{,CXX}SHARED properly for Darwin, reported by Fabian Groffen on bug #513664.
unbreak distutils builds on Darwin
Always set up CC, CXX and friends for distutils builds, bug #513664. Thanks to Arfrever for the explanation.
Work around bash-4.3 bug by setting PYTHONDONTWRITEBYTECODE to an empty string.
Fail when package installs "share" subdirectory to PyPy prefix. This should stop people from adding PyPy support to packages that do not work due to the bug in PyPy.
Silence sandbox for /usr/local, bug 498232.
Remove pointless distutils-r1_python_test function.
Override bdist_egg->build_dir via pydistutils.cfg rather than extra command. Fixes bug #489842.
Read all shebangs before moving files to avoid breaking symlinks that are going to be scanned.
Fix distutils-r1_python_install to strip --install-scripts= rather than passing "install" twice to override it. Fixes compatibility with dev-python/paver.
Fix failing to pass default install arguments when user passes an additional command. Reported by radhermit.
Support installing Python scripts with custom --install-scripts argument. Bug #487788.
Do not alter HOME and TMPDIR when single impl is being used. This may work-around bug #487260.
Truncate .pydistutils.cfg in case we call distutils-r1_python_compile more than once.
Use pydistutils.cfg to set build-dirs instead of passing commands explicitly. This should reduce the amount of implicit behavior.
Make HOME per-implementation.
Wrap symlinks installed to PYTHON_SCRIPTDIR as well.
Fixed prefix qa
Fix accepting arguments in distutils_install_for_testing.
Use einstalldocs.
Support python-exec:2.
Clean up Python script install/wrapping functions.
Mark _copy-egg-info as internal.
Copy bundled egg-info files for reuse in python_compile(). This solves issues that caused some of the files not to be installed in broken packages.
Namespace, clean up and describe _disable_ez_setup.
Shout QA warnings when _all() phases do not call the default impls. Bug #478442.
Replace local+export with "local -x".
Stub-out ez_setup.py and distribute_setup.py to prevent packages from downloading their own copy of setuptools.
Set PYTHON_REQUIRED_USE, and add it to REQUIRED_USE in distutils-r1.
Use bash built-ins rather than external tools.
Fix python_*_all() phases with DISTUTILS_SINGLE_IMPL.
Unmask the egg_info block for further testing. Feel free to comment it out if you can reproduce the earlier issues.
Move the egg_info code into a more realistic location for future testing.
Introduce multibuild_merge_root, as an universal interim-install merging function.
Pass --build-platlib and --build-purelib separately to distutils. This allows to change them to different locations if necessary (bug #455332).
Reverse order of $add_args and $@ in esetup.py. Remove duplicate build command from distutils-r1_python_compile.
Use doins instead of dodoc to install examples, due to PMS limitations. Bug #460516.
Use multilib_for_best_variant() for the *_all() phases.
Introduce python_parallel_foreach_impl() and reuse it in distutils-r1.
Run *_all() phases in best-impl sources, in an in-source build.
Override build locations and set PYTHONPATH in in-source builds, to increase compatibility with out-of-source builds.
In-source builds: append "build/" subdir to the BUILD_DIR variable. It can be used alike in out-of-source builds now.
Revert the log teeing changes as they cause unexpected kind of breakage.
Re-use python_parallel_foreach_impl() in distutils-r1.
Introduce the parallel variant of python_foreach_impl().
Support EXAMPLES to install examples in a consistent manner.
Support DOCS=() to disable installing documentation.
Temporarily disable egg_info since it causes problems with installing scripts.
Introduce a function to install package for running tests, solving all the issues with PyPy, setuptools and namespaces.
Override egg-info write location in out-of-source builds.
Error out if "tests" package is installed. This is a common mistake and a source of file collisions.
Support using distutils-r1 along with python-single-r1.
Support making dependency and phase function enforcement optional.
Do not redirect stderr of pushd&popd. Thanks to vapier for catching that.
Fix typo
Provide best implementation's build dir in python_*_all() as BUILD_DIR to make use of esetup.py easier.
Pass arguments to setup.py in an predictable order, especially do not prepend commands before user options.
Support specifying directories in DOCS.
Do not call dummy phases unnecessarily.
Close the lock file explicitly instead of relying on the subshell created by the pipe to tee in distutils-r1_run_phase.
Use locks to avoid race conditions when merging (bug #449760). Use tar instead of cp on FreeBSD due to bug when copying broken symlinks (bug #447370).
Support overriding job-count for parallel build.
Always write split logs, even in non-parallel builds.
addpredict /usr/lib/portage/pym in distutils-r1_src_install.
Remove myself from explicit maintainers, it is enough to assign the bugs to Python team.
Pass --build-scripts path to setup.py (when out-of-source builds are used).
Remove outdated comments and checks.
Add (temporary) fix for sandbox failures when compiling Python modules. Bug #447126.
Fix python-exec symlink generation for Prefix. Thanks to Fabian Groffen for help.
Use separate TMPDIR for each Python implementation.
Report running implementation-common sub-phases verbosely.
Write split build logs for easier debugging.
Use multiprocessing post-fork wait mode to avoid early output when all jobs are running.
Do not die when sub-phases return non-true value. This is inconsistent with normal phase behavior and not really useful since phase functions are supposed to die on their own.
Pass the best Python implementation info to the implementation-common phase functions.
Support parallel builds using multiprocessing eclass.
Create the wrapper symlinks directly in _distutils-r1_rename_scripts rather than postponing it to distutils-r1_install_all.
Use intermediate-root install.
Make distutils-r1_rename_scripts private. Rename all matching executables recursively in given path(s) rather than using hardcoded path list.
Run EXPORT_FUNCTIONS even if re-inheriting, to preserve the expected phase overrides.
Fix EAPI checks, add double- and colliding include guards.
Fix enabling byte-compiling.
Export PYTHONPATH for phases in out-of-source builds.
Explicitly set library build dir in out-of-source builds.
Enable byte-compilation of Python modules only locally for distutils-r1_python_install(). Thanks to Enlik for reminding me of it.
Support and use out-of-source builds by default.
Introduce an esetup.py wrapper function and mydistutilsargs=() for it.
Remove redundant "cd ${BUILD_DIR}" calls.
Move python-exec dependency to python-r1. That eclass now provides means to create versioned scripts as well.
Use find instead of hard-coded executable locations list when linking the wrapper.
Use new python-r1 functions.
Do not enter BUILD_DIR in python_foreach_impl(), do that in distutils-r1 instead.
Add games/bin to lookup paths for rename_scripts().
Introduce python_export() to set Python-relevant variables, and document them better.
Improve documentation and a few minor fixes.
Call no-op default phases for each implementation (meaningless but more correct).
Enable EAPI 5 support.
Fix missing wrapper symlinks when first supported Python implementation is disabled.
Introduce distutils-r1, a new (and simpler) eclass for Python packages using distutils build system.
}
}
I'm guessing you're confusing = with ==?
== is a comparison operator, whereas = is an assignment operator. Try using a comparison in the if conditional.
Besides you could have said
var image_height = img.height;
instead of
var image_height = document.getElementById('resizing').height;
Another advice. It is always a good practice to put a semi-colon after each statement.
Well, I've done it my way because I want the script to affect only the elements with the id 'resizing'... thanks a lot, I'll try your suggestion and report back
Nope... didn't work. Look, here is the link to the site, and you already know the script code... (I made the borders visible to see what's going on :] )
I don't think you understand what was suggested and why.
var img = document.getElementById('resizing');
var image_height = document.getElementById('resizing').height; // this is not needed
var image_height = img.height; // this is what you need.
With the object img, accessing a property of that object like img.height is just the same as saying document.getElementById('resizing').height;
The point I am making is that you already have "grabbed" a reference to the object, so repeating the lookup is pointless duplication of the object's information.
JunkMale;1182438 wrote:I don't think you understand what was suggested and why.
var img = document.getElementById('resizing');
var image_height = document.getElementById('resizing').height; // this is not neededvar image_height = img.height; // this is what you need.the object image when accessing a property of that object like img.height is only the same as saying document.getElementById('resizing').height;
hmm ok but then how do i refer only to the images with the id #resizing?
You can only refer to ONE id as ALL ID's have to be UNIQUE
It is NAME attributes in elements that can be shared; your alternative would be to use the getElementsByClass workaround and give only those elements you want changed a class name and an empty entry in the CSS table.
OK, I've changed it the way you said but it's still not working... oh, and I've tried img.height instead of img.style.height but it still didn't work...
function resize_images()
{
var img = document.getElementById('resizing');
var window_height = document.body.clientHeight;
var image_height = image.height;
if(image_height == window_height / 100 * 70);
{
img.height = window_height / 100 * 70;
}
}
You're posting only your script.
we have no HTML to associate with your function.
we have no way of knowing how you have implemented your script and code.
It's img.height, not image.height:
function resize_images()
{
    var img = document.getElementById('resizing');  // declared as "img"
    var window_height = document.body.clientHeight;
    var image_height = image.height; // <---- Pardon me! ("image" was never declared)
    if(image_height == window_height / 100 * 70);
    {
        img.height = window_height / 100 * 70;
    }
}
So please post your HTML so we can see what it is that's causing the function to not work, in addition to making the changes in the code as indicated.
Oh damn, it worked when I changed image to img, but only for the first pic... As you mentioned, every object should have a unique id... so would giving a class name instead of an id work?
Here is my HTML: (I posted the link to the page so you could see the code, but I guess you didn't bother)
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<html xmlns="">
<head>
<meta http-
<script type="text/javascript" src="js/resize.js"></script>
<link rel="stylesheet" type="text/css" href="stl/stl.css" />
<title>SENOL KOC '12</title>
</head>
<body onload="resize_images()">
<table id="siteContainer" cellpadding="0" cellspacing="0">
<tr>
<td id="header">
<div id="menuContainer">
<a href="index.php" target="_top"><img id="title" alt="Senol Koc" src="pix/senol.png" /></a>
<br /><br />
<div id="menu">
<div id="buttonContainer1">
<div id="button1" style="background-color:#666;"> </div>
<div id="menuButton1"><a href="beauty.php">Beauty</a></div>
</div>
<div id="buttonContainer2">
<div id="button2"> </div>
<div id="menuButton2"><a href="fashion.php">Fashion</a></div>
</div>
<div id="buttonContainer3">
<div id="button3"> </div>
<div id="menuButton3"><a href="celeb.php">Celebrity</a></div>
</div>
<div id="buttonContainer4">
<div id="button4"> </div>
<div id="menuButton4"><a href="portrait.php">Portraits</a></div>
</div>
<div id="buttonContainerSpace">
<div id="menuSpaceThin"> </div>
<div id="menuSpace"> </div>
</div>
<div id="buttonContainer5">
<div id="button5"> </div>
<div id="menuButton5"><a href="contact.php">Contact</a></div>
</div>
</div>
</div>
</td></tr>
<tr><td id="galleryContent">
<table id="gallery" cellpadding="0" cellspacing="0"><tr>
<?php
$files = glob("beauty/*.jpg");
for ($i=1; $i<count($files); $i++)
{
$num = $files[$i];
echo '<td><img id="resizing" src="'.$num.'" alt="random image"></td>';}
?>
</tr>
</table>
</td></tr>
<tr><td id="footer">
<div id="galleryFooter">
<span id="site"><a href="index.php"></a></span><span id="rights">© 2012 ®Tum haklari saklidir.</span>
</div>
</td></tr>
</table>
</body>
</html>
This bit in your PHP
<img id="resizing" src="'.$num.'" alt="random image"></td>
the id could be changed to name and you use the document.getElementsByName which means you could then do...
<img name="resizing" src="'.$num.'" alt="random image"></td>
and BTW...
<img id='resizing' src='{$files[$i]}' alt='random image'></td>
is easier to read: open and close your strings with " first and use inner quotes of ', and where you need to have a value, as illustrated, the $num is replaced directly with {$files[$i]}
Then when you've got your PHP pumping out code correctly, you can worry about the JavaScript, as it's pointless dealing with that until the underlying HTML is working and properly formatted.
Neither ClassName nor Name worked... but ById worked (only for the first img), and what you said about PHP stopped my script; it was working fine the original way...
Search the internet for the function getElementByClass and also the getElementByName
With the "Class" you use this with an empty stylesheet entyr, call it imagesize, then you apply the class="imagesize" to the <img tag you generate, then the getElementByClass will pull all elements with the class name of "imagesize" ready for your script to manipulate.
The getElementsByName call will pull all elements by their name tag.
Again, we have no script to see how you implemented the changes, so it's stabbing-in-the-dark time again. Talking of which, I have been up since 4am and it's coming up to midnight, so I am offski to bedski.
I suggest that you understand how to use these functions and the importance of getting your HTML output error-free first, and then worry about the JavaScript.
You have heard of the expression "running before you can walk"; well, this is just that scenario: you're transfixed by the JavaScript end of the programming, which is pointless unless you have the underlying HTML sorted out.
Thanks a lot for the effort, man... I've also tried document.getElementById('gallery').getElementsByName('resizing'); but it didn't work. I haven't made any other changes to the HTML and it doesn't have any errors... I validated it... sleep well
wrong, you don't use it like that, it is either...
document.getElementById
or
document.getElementByName
document.getElementByNameTag
document.getElementByClass
where the "Class" one you need to provide an empty CSS element that you use as a marker in the page.
and with that, I am going to spend the next several hours examining the inside of my eyelids.
Well, the Mozilla Developer Network said it should be like that
ok man gnite say hi to ur eyelids
Ok, back on with this...
If you use any of the methods to access the DOM Tree data, all returns will have properties that you can access that relate to the object type you have returned in the function.
img = document.getElementsByName("thisElement"); will grab all the elements on the form with a name tag name="thisElement" set. e.g...
<img name="thisElement" alt="An Image" src="1.jpg" >
So between getElementsByName and getElementsByClass, I would use the getElementsByName and set up the name in all the elements you want to access or manipulate.
Then you access the properties from results of that function.
So in the case of an image object, all you would do is grab the element, access the properties, make amendments as needed and move on to the next element. The code below roughly summarises the function that gets the data by finding the actual elements you want:
function getTheData( str ){
    var results = [];
    var x = document.getElementsByName(str);  // note: getElementsByName, plural
    for(var res=0; res<x.length; res++){
        // do we have a match
        if( x[res].name == str){
            results.push( x[res] ); // store the object
        }
    }
    return (results.length==0)? false : results; // now return the results or false
}
So when you call getTheData('resizing'); on your page, assuming that the HTML has been sorted out, it would return all elements in the DOM tree that have a name of 'resizing'.
You then iterate the returned results array and access each object's properties, amending as needed.
Like I pointed out previously, UNLESS your HTML output is fully functioning, no matter what you try script-wise, the results could be unpredictable, nothing may be returned, or the data may be useless. So you need to have your HTML validated to ensure you have no errors. When the HTML is properly generated, you then concentrate on the JavaScript side of things.
New in version 0.9.2.
Functions for wrapping strings in ANSI color codes.
Each function within this module returns the input string text, wrapped with ANSI color codes for the appropriate color.
For example, to print some text as green on supporting terminals:
from fabric.colors import green print(green("This text is green!"))
Because these functions simply return modified strings, you can nest them:
from fabric.colors import red, green print(red("This sentence is red, except for " + green("these words, which are green") + "."))
If bold is set to True, the ANSI flag for bolding will be flipped on for that particular invocation, which usually shows up as a bold or brighter version of the original color on most terminals. | http://docs.fabfile.org/en/1.4.3/api/core/colors.html | CC-MAIN-2014-10 | refinedweb | 122 | 65.62 |
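Under the hood, helpers like these simply wrap the text in ANSI SGR escape sequences. The sketch below is a minimal reimplementation for illustration, not Fabric's actual source; the color codes 32 (green) and 31 (red) and the 1; bold prefix follow the usual ANSI convention.

```python
def _wrap_with(code):
    """Build a color function for the given ANSI SGR color code."""
    def inner(text, bold=False):
        c = code
        if bold:
            c = "1;%s" % c  # prepend the ANSI "bold" attribute
        return "\033[%sm%s\033[0m" % (c, text)  # color on, text, reset
    return inner

green = _wrap_with('32')
red = _wrap_with('31')

print(green("This text is green!"))
print(red("This is bold red", bold=True))
```

Because each function just returns a plain string, nesting works exactly as in the examples above; the inner call's trailing reset code simply ends the color early.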
On 05/05/2011 05:25 AM, Michal Privoznik wrote:
> Users often edit XML file stored in configuration directory
> thinking of modifying a domain/network/pool/etc. Thus it is wise
> to let them know they are using the wrong way and give them hint.
> ---
> diff to v1:
>   - instead of pointing users to web, write down the actual virsh command
>   - write to passed FD instead of buffer

My apologies for not noticing sooner, but...

>  int virDomainSaveXML(const char *configDir,
>                       virDomainDefPtr def,
> -                     const char *xml)
> +                     const char *xml,
> +                     unsigned int flags)
>  {
>      char *configFile = NULL;
>      int fd = -1, ret = -1;
> @@ -8510,6 +8511,9 @@ int virDomainSaveXML(const char *configDir,
>          goto cleanup;
>      }
>
> +    if (flags & VIR_XML_EMIT_WARNING)
> +        virEmitXMLWarning(fd, def->name, VIR_XML_EDIT_COMMAND_DOMAIN);

Every last caller to this function is now passing the same flag, so the
flags argument is redundant. I thought we might have a difference where
sometimes we need it and sometimes we don't, but I don't see any cases
where we don't. I'd prefer a v3 that touches fewer lines of code by not
adding the extra flags argument, but just unconditionally calling
virEmitXMLWarning here...

> +++ b/src/util/util.c
> @@ -3207,3 +3207,53 @@ bool virIsDevMapperDevice(const char *devname ATTRIBUTE_UNUSED)
>      return false;
>  }
>  #endif
> +
> +VIR_ENUM_IMPL(virXMLEditCommand, VIR_XML_EDIT_COMMAND_LAST,
> +              "",

Why do we need the "" entry?

> +              "edit",
> +              "net-edit",
> +              "nwfilter-edit",
> +              "pool-edit")
> +
> +int virEmitXMLWarning(int fd,
> +                      const char *name,
> +                      unsigned int cmd) {
> +    size_t len;
> +    const char *cmd_str = virXMLEditCommandTypeToString(cmd);
> +    const char *prologue = _("<!--\n\
> +WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE \n\
> +OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:\n\
> +virsh ");

Do we really want these strings translated, or are they okay always
being in English? The rest of the xml file is locale independent, and
I'm worried that if we put in a translated message, that a translator
might not form a well-formed xml comment in their translation, which
breaks things. Furthermore, this is in a root-accessible file, much like
libvirtd.conf or qemu.conf, where we aren't translating any comments in
those files.

> +++ b/src/util/util.h
> @@ -299,4 +299,24 @@ int virBuildPathInternal(char **path, ...) ATTRIBUTE_SENTINEL;
>  char *virTimestamp(void);
>
>  bool virIsDevMapperDevice(const char *devname) ATTRIBUTE_NONNULL(1);
> +
> +enum {
> +    VIR_XML_EMIT_WARNING = (1 << 0),
> +};

Since there was no one in your patch that passed 0, we don't need this flag.

> +
> +enum virXMLEditCommand {
> +    VIR_XML_EDIT_COMMAND_UNKNOWN = 0,

Since this command is completely internal, we should never be passing it
a bad command, and can start directly with domain. But do we even need an
enum, or can we just pass a const char * and save ourselves the effort of
a lookup in the first place?

--
Eric Blake   eblake redhat com   +1-801-349-2682
Libvirt virtualization library
The previous post provided some code for generating Poisson disc noise (in which no point in the sample is closer than some fixed distance to any other). Here is a short program to calculate the power spectrum of this noise and compare it with the spectrum for the same number of points drawn from a uniform distribution.
Top: Poisson disc noise (left) and uniform noise (right); bottom: their power spectra. As expected, the spectrum of the uniform noise is featureless, whereas the spectrum for Poisson disc noise has concentric circles around a blank disc.
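In outline, each spectrum is obtained by binning the sample points onto a binary grid and taking the magnitude of the shifted two-dimensional FFT. A minimal, self-contained sketch of just that step (the grid size and point count here are arbitrary, smaller than in the full program below):

```python
import numpy as np

rng = np.random.default_rng(42)

# Scatter n uniform points onto a binary grid, then take |FFT|^2.
n, size = 500, 128
pts = rng.uniform(0, size, size=(n, 2))
grid = np.zeros((size, size))
grid[pts[:, 1].astype(int), pts[:, 0].astype(int)] = 1

# Shift zero frequency to the centre so the spectrum is easy to inspect.
power = np.abs(np.fft.fftshift(np.fft.fft2(grid)))**2
print(power.shape)  # (128, 128)
```

The full program below does the same thing, but averages the (log) spectrum over many sampling runs to suppress the noise in the estimate itself.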
The code below uses a version of the Poisson Disc algorithm that I have cast into object-oriented form (also given below).
This code is also available on my github page.
```python
import numpy as np
import matplotlib.pyplot as plt
from poisson import PoissonDisc

class UniformNoise():
    """A class for generating uniformly distributed, 2D noise."""

    def __init__(self, width=50, height=50, n=None):
        """Initialise the size of the domain and number of points to sample."""
        self.width, self.height = width, height
        if n is None:
            n = int(width * height)
        self.n = n

    def reset(self):
        pass

    def sample(self):
        return np.array([np.random.uniform(0, self.width, size=self.n),
                         np.random.uniform(0, self.height, size=self.n)]).T

# Domain size, minimum distance between samples for Poisson disc method...
width = height = 100
r = 2
poisson_disc = PoissonDisc(width, height, r)
# Expected number of samples from Poisson disc method...
n = int(width * height / np.pi / poisson_disc.a**2)
# ... use the same for uniform noise.
uniform_noise = UniformNoise(width, height, n)

# Number of sampling runs to do (to remove noise from the noise in the
# power spectrum).
N = 100
# Sampling parameter, when putting the sample points onto the domain.
M = 5

fig, ax = plt.subplots(nrows=2, ncols=2)
for j, noise in enumerate((poisson_disc, uniform_noise)):
    print(noise.__class__.__name__)
    spec = np.zeros((height * M, width * M))
    for i in range(N):
        print('{}/{}'.format(i+1, N))
        noise.reset()
        samples = np.array(noise.sample())
        domain = np.zeros((height * M, width * M))
        for pt in samples:
            coords = int(pt[1] * M), int(pt[0] * M)
            domain[coords] = 1
        # Do the Fourier Transform, shift the frequencies and add to the
        # running total.
        f = np.fft.fft2(domain)
        fshift = np.fft.fftshift(f)
        spec += np.log(np.abs(fshift))

    # Plot a set of random points and the power spectrum.
    ax[0][j].imshow(domain, cmap=plt.cm.Greys)
    ax[1][j].imshow(spec, cmap=plt.cm.Greys_r)
    # Remove axis ticks and annotations.
    for k in (0, 1):
        ax[k][j].tick_params(which='both', bottom='off', left='off',
                             top='off', right='off', labelbottom='off',
                             labelleft='off')

plt.savefig('periodograms.png')
```
The class PoissonDisc is defined in the file poisson.py, given here:
```python
import numpy as np

class PoissonDisc():
    """A class for Poisson disc sampling in two dimensions."""

    def __init__(self, width=50, height=50, r=1, k=30):
        self.width, self.height = width, height
        self.r = r
        self.k = k
        # Cell side length
        self.a = r/np.sqrt(2)
        # Number of cells in the x- and y-directions of the grid
        self.nx, self.ny = int(width / self.a) + 1, int(height / self.a) + 1
        self.reset()

    def reset(self):
        """Reset the cells dictionary."""
        # A list of coordinates in the grid of cells
        coords_list = [(ix, iy) for ix in range(self.nx)
                                for iy in range(self.ny)]
        # Initialize the dictionary of cells: each key is a cell's
        # coordinates; the corresponding value is the index of that cell's
        # point's coordinates in the samples list (or None if the cell is
        # empty).
        self.cells = {coords: None for coords in coords_list}

    def get_cell_coords(self, pt):
        """Get the coordinates of the cell that pt = (x,y) falls in."""
        return int(pt[0] // self.a), int(pt[1] // self.a)

    def get_neighbours(self, coords):
        """Return the indexes of points in cells neighbouring the cell at
        coords: those cells that could contain points closer than r."""
        # NB the start of this method was lost from the original page; the
        # loop below scans the 5x5 block of cells around coords, a superset
        # of the cells that can contain a point within r.
        neighbours = []
        for dx in range(-2, 3):
            for dy in range(-2, 3):
                neighbour_coords = coords[0] + dx, coords[1] + dy
                if not (0 <= neighbour_coords[0] < self.nx and
                        0 <= neighbour_coords[1] < self.ny):
                    # We're off the grid: no neighbours here.
                    continue
                neighbour_cell = self.cells[neighbour_coords]
                if neighbour_cell is not None:
                    # This cell is occupied: store the index of the
                    # contained point.
                    neighbours.append(neighbour_cell)
        return neighbours

    def point_valid(self, pt):
        """Is pt a valid point to emit as a sample?

        It must be no closer than r from any other point: check the cells
        in its immediate neighbourhood.

        """
        cell_coords = self.get_cell_coords(pt)
        for idx in self.get_neighbours(cell_coords):
            nearby_pt = self.samples[idx]
            # Squared distance between candidate point, pt, and this
            # nearby_pt.
            distance2 = (nearby_pt[0]-pt[0])**2 + (nearby_pt[1]-pt[1])**2
            if distance2 < self.r**2:
                # The points are too close, so pt is not a candidate.
                return False
        # All points tested: if we're here, pt is valid.
        return True

    def get_point(self, refpt):
        """Try to find a candidate point near refpt to emit in the sample.

        Draw up to k candidate points from the annulus of inner radius r
        and outer radius 2r centred on refpt; return the first valid one,
        or False if none is found.

        """
        i = 0
        while i < self.k:
            rho, theta = (np.random.uniform(self.r, 2*self.r),
                          np.random.uniform(0, 2*np.pi))
            pt = refpt[0] + rho*np.cos(theta), refpt[1] + rho*np.sin(theta)
            if not (0 <= pt[0] < self.width and 0 <= pt[1] < self.height):
                # This point falls outside the domain, so try again.
                continue
            if self.point_valid(pt):
                return pt
            i += 1
        # We failed to find a suitable point in the vicinity of refpt.
        return False

    def sample(self):
        """Poisson disc random sampling in 2D.

        Draw random samples on the domain width x height such that no two
        samples are closer than r apart. The parameter k determines the
        maximum number of candidate points to be chosen around each
        reference point before removing it from the "active" list.

        """
        # Pick a random point to start with.
        pt = (np.random.uniform(0, self.width),
              np.random.uniform(0, self.height))
        self.samples = [pt]
        # Our first sample is indexed at 0 in the samples list...
        self.cells[self.get_cell_coords(pt)] = 0
        # and it is active, in the sense that we're going to look for more
        # points in its neighbourhood.
        active = [0]

        # As long as there are points in the active list, keep looking for
        # samples.
        while active:
            # Choose a random "reference" point from the active list.
            idx = np.random.choice(active)
            refpt = self.samples[idx]
            # Try to pick a new point relative to the reference point.
            pt = self.get_point(refpt)
            if pt:
                # Point pt is valid: add it to samples list and mark as
                # active.
                self.samples.append(pt)
                nsamples = len(self.samples) - 1
                active.append(nsamples)
                self.cells[self.get_cell_coords(pt)] = nsamples
            else:
                # We had to give up looking for valid points near refpt,
                # so remove it from the list of "active" points.
                active.remove(idx)
        return self.samples
```
vaccess — generate an access control decision using vnode parameters
#include <sys/param.h>
#include <sys/vnode.h>
int
vaccess(enum vtype type, mode_t file_mode, uid_t file_uid,
    gid_t file_gid, accmode_t accmode, struct ucred *cred,
    int *privused);

[…]() in order to perform the actual check.
Implementations of VOP_ACCESS(9) may choose to implement additional
security mechanisms whose results will be composed with the return value.
The algorithm used by vaccess() selects a component of the file permission bits depending on whether the credential UID or GIDs match the passed file owner and group; […] then the "other" component of the permission bits is selected.
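The selection rule being described — pick the owner, group, or "other" rwx component of the mode word according to which identity the credential matches — can be sketched like this. (This is a hypothetical illustration in Python, not the kernel's C implementation; names such as `select_bits` are invented here.)

```python
def select_bits(file_mode, file_uid, file_gid, cred_uid, cred_gid):
    """Pick the owner, group or 'other' rwx component of a mode word."""
    if cred_uid == file_uid:
        return (file_mode >> 6) & 0o7   # owner component
    if cred_gid == file_gid:
        return (file_mode >> 3) & 0o7   # group component
    return file_mode & 0o7              # "other" component

# Mode 0o754: owner rwx, group r-x, other r--.
print(select_bits(0o754, 1000, 100, 1000, 100))  # 7
print(select_bits(0o754, 1000, 100, 2000, 100))  # 5
print(select_bits(0o754, 1000, 100, 2000, 200))  # 4
```

The selected component is then compared against the requested access mode to produce the final decision.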
vaccess() will return 0 on success, or a non-zero error value on failure.
vaccess_acl_nfs4(9), vaccess_acl_posix1e(9), vnode(9), VOP_ACCESS(9)
This manual page and the current implementation of vaccess() were written
by Robert Watson.
#include <netinet/in.h>
int
socket(AF_INET, SOCK_STREAM, 0);

[…] TCP gathers small amounts of output to be sent in a single packet once an
acknowledgement is received. For a small number of clients, such as win-
window systems that send a stream of mouse events which receive no replies, this packetization may cause significant delays. Therefore, TCP provides a boolean option, TCP_NODELAY (from <netinet/tcp.h>), to defeat this algorithm.
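From user code the option is set with setsockopt(2); a quick illustration in Python (shown only to demonstrate the option itself, it is not part of the man page):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable the packet-gathering (Nagle) algorithm for this socket.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0)  # True
s.close()
```

Interactive, small-write clients such as the mouse-event example above are the typical users of this option; bulk-transfer applications usually benefit from leaving the algorithm enabled.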
A socket operation may fail with one of the following errors returned:
[EISCONN] when trying to establish a connection on a socket which already has one; […]
netstat(1), intro(4), inet(4), ip(4)
The tcp protocol stack appeared in 4.2BSD.
4.2 Berkeley Distribution June 5, 1993
Can anyone help me convert a Windows DLL file into a .so file?
I am trying to convert my SQL query to HQL. Details are below: I am getting the domain objects from a list and doing executeQuery on that. So:

def sTblList = this.getMonths(SfromDate, StoDate)
sTblList.each {
    def OnemonthMap = it.executeQuery("select sum(stxntbl.actioncount) as t_cnt, stxntbl.eventdesc as event_desc from " + it.getSimpleName() + " stxntbl […]

I have a System.Drawing.Image screenshot file. I cast it to bmp,
but the problem is that it makes a 32 bit bmp, while I need a 24 bit one.
How can I convert it to 24?
I am trying to take some XML code, a sample being below:

<time_report>
  <project_ID>4</project_ID>
  <project_status>close</project_status>
  <client_ID>6001</client_ID>
  <time_record>
    <project_start_time>15:02:33</project_start_time>
    <project_end_time>15:07 […]
I was reading this unfinished article on Jython (part of what looks to be a very good but also incomplete series of articles on JVM languages) and he had a few criticisms/suggestions of Python. I've mentioned some minor things that annoy me before, but this made me want to collect them all, and list all the things I wish Python did. Python 3K is already taken, but I'll up the ante -- Python 4K will be 33% better than even Python 3K. (Note: implementation of Python 4K is left as an exercise for the reader.)
This isn't all backward compatible, but that said I'm not inclined to suggest needless backward incompatibilities either. Some people seem allergic to any "bloat" in a language, and obsess over removing things from builtins and elsewhere. I find this silly. Some people need to learn how to ignore what they don't care about.
So, what's my list?
Strings would not be iterable (noted here).
The self in method signatures would become optional. self would be a keyword with special significance. I regularly have bugs because of this -- mostly because I accidentally add self when calling methods on self, not because I leave them out of the function definition. But still, I find this lame.
cls would similarly become a keyword -- or maybe even allow class -- for class methods. For symmetry as much as anything.
super would become a keyword with special significance. I find super really confusing to use.
Maybe class methods wouldn't be callable on instances. But class variables should stay visible through instances. That seems tricky at first, but I actually think it's just a matter of a slightly different classmethod implementation.
We'd get some kind of declarative/lazy language extension, maybe like this. That turns class into one of several possible constructor arguments (property being a notable second such addition). Interface declarations could also be done more cleanly this way. This means that:
interface IFoo(Bar):
    a = 1
Means IFoo = interface('IFoo', (Bar,), {'a': 1}). You can kind of do that right now with metaclasses, but it feels very hacky, and Bar has to be designed for it. Something more conservative might also be just fine. It annoys me that attributes are unordered through these techniques. But maybe not worth fixing.
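Something very close is already expressible with the three-argument type() constructor, which is essentially what such a declarative statement would desugar to (Bar here is just a stand-in base):

```python
# What `interface IFoo(Bar): a = 1` might desugar to, using plain type().
Bar = type('Bar', (object,), {})
IFoo = type('IFoo', (Bar,), {'a': 1})

print(IFoo.a, issubclass(IFoo, Bar))  # 1 True
```

The dict argument is exactly where the "attributes are unordered" complaint comes from: the class body's definitions arrive as an unordered mapping, so their source order is lost.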
object grows a __classinit__ method, as described in a conservative metaclass.
Something to handle the descriptor problem, where descriptors don't know about what they are attached to. Having tried several things related to this, handling subclassing is hard, but I think it's important enough to make happen even if there are some inevitable warts.
There would only be bytes and unicode, no str, none of these Unicode*Errors. I hate those so much. Everyone does. And none of this "unicode is just intrinsically hard" crap. That's what slackers say about hard things. Python 4K will be a stand-up language, not some slacker language!
I'll admit this is a big backward compatibility issue. But unicode has already introduced huge backward compatibility problems, but certain people aren't willing to admit it, and instead just call the issues "bugs". Yeah, there's more bugs than non-bugs in the standard library, so I'm unimpressed.
Maybe something to clean up magic methods, like def *(other):. This wouldn't clean up all magic methods, but would clean up several. And right now, you actually can add a '*' method to objects. Maybe def '*'(other)? Not a big deal; magic methods are magic anyway, whatever name you give them.
Some query syntax for LINQ-like expressions -- that is, expressions which are introspectable and not run immediately. SQLObject's SQLBuilder (similar things seen elsewhere) is similar, but all hacky. Stuff like this and this would be easier and cleaner with it.
Some order-by syntax. Then you could do [value for key, value in some_dict.items() orderby value]. Right now you have to do [value for key, value in sorted(some_dict.items(), key=lambda v: v[1])] and that's obviously not as nice. Nice for the queries too.
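For comparison, the sorted()-based spelling mentioned above does work today; it is just noisier than a hypothetical orderby clause would be:

```python
some_dict = {'a': 3, 'b': 1, 'c': 2}

# Values ordered by value -- today's spelling of the proposed
# `[value for key, value in some_dict.items() orderby value]`.
values = [value for key, value in
          sorted(some_dict.items(), key=lambda kv: kv[1])]
print(values)  # [1, 2, 3]
```

The lambda is doing nothing but naming which part of each item to sort on, which is the boilerplate an orderby syntax would eliminate.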
Some string interpolation. Probably $"something ${expression}", ala PEP 215.
Some resolution to the anonymous function thing. I agree completely with Guido -- anonymous functions and function-expressions beyond lambda don't fit in Python. I could care less if lambda goes away -- I just want something good and lambda isn't good (I'm not a minimalist). I haven't figured out exactly what I want. I would probably defer to the brainstorming of the Twisted guys and other people who frequently bang their heads on this.
Some way of dealing with reloading. Maybe just cleaning up reload() (making it overwrite classes instead of recreating them, for instance). If that fancier reloading is part of the language, then people might be more careful about working so they don't break it. There would have to be an option of module-level functions __before_reload__ and __after_reload__ or something, so that even tricky code could fix itself up.
Something for object initialization. I find this tedious:
class Foo(object):
    def __init__(self, bar, baz):
        self.bar = bar
        self.baz = baz
But I'm not actually sure what to do. I mentioned that here.
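One possible direction — shown purely as a sketch, not something the post commits to — is to generate __init__ from a list of attribute names (the helper name auto_init is invented here):

```python
def auto_init(*names):
    """Build an __init__ that stores positional args under the given names."""
    def __init__(self, *args):
        for name, value in zip(names, args):
            setattr(self, name, value)
    return __init__

class Foo(object):
    __init__ = auto_init('bar', 'baz')

f = Foo(1, 2)
print(f.bar, f.baz)  # 1 2
```

This removes the repetition, at the cost of hiding the signature from introspection tools — which is roughly why the right answer is not obvious.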
I think double-underscore class-private variables (like self.__private) are super-lame. People think they are something they are not. They are not really-truly-private variables. They are I'm-going-to-piss-people-off private variables. They are I'm-too-lazy-to-use-name-mangling private variables. They are I'm-going-to-break-subclassing private variables. If you want class-private variables for a valid reason, explicit name mangling is in all ways better.
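The mangling referred to here is purely textual, which is easy to see from outside the class:

```python
class Foo(object):
    def __init__(self):
        self.__private = 1  # actually stored as _Foo__private

f = Foo()
print(f._Foo__private)         # 1 -- "private" in name only
print('__private' in vars(f))  # False
```

A subclass named Bar gets its own `_Bar__private`, silently distinct from the parent's attribute — which is the "break subclassing" behaviour complained about above.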
Delegation using __getattr__ and __setattr__ can be hard to work with, both creating delegates and consuming them. I'd like if there was more explicit support for this, with better failure conditions. Maybe this just means a nice built-in metaclass that solves this problem particularly well.
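A minimal __getattr__-based delegate, to illustrate the pattern being discussed (the failure conditions and __setattr__ side are exactly what this sketch glosses over):

```python
class Delegator(object):
    def __init__(self, target):
        self._target = target

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails:
        # forward the miss to the wrapped object.
        return getattr(self._target, name)

d = Delegator([1, 2, 2, 3])
print(d.count(2))  # 2 -- forwarded to the wrapped list
```

Because __getattr__ only fires on lookup failure, any attribute the wrapper itself defines silently shadows the target — one of the sharp edges the post wants built-in support for.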
Something better for forward-references. I don't know exactly what to call these -- except it means for references to things that don't exist yet. Like a reference to a class that may or may not exist yet. Maybe this could be handled with a descriptor or something. Give it a specific implementation and pattern, so people can use it more easily than having to build the pattern up from what's there currently.
There's probably some interpreter things I'd like too...
Something like Pyrex but all polished and pretty. Maybe RPython can be this.
Microprocesses -- I haven't really used Erlang much, but I'm still drawn to it more than any of the other languages in that branch (the weird functional branch -- Haskell, ML, etc). They don't have microthreads (as you might see in Stackless), but they are basically the same except without shared state. Messages only. I think that's so much cooler than microthreads, at least if you can pull it off and still be fast and reliable.
Really fast interpreter startup. That might make the reloading thing much less important. I imagine frozen modules -- freezing the actual module in its full glory just as it looks immediately after import. With copy-on-write in some fashion, to facilitate really fast microprocess startup too (so microprocesses can share some set of modules so long as they aren't written to).
The first part (freezing) seems not-that-hard right now. In theory you just put pickle and marshal together and you got it, so long as the module doesn't do anything environment-dependent on startup (modules could opt in or opt out or something with respect to their environment dependencies). Copy-on-write is probably much much harder; though maybe it's an extension of garbage collection algorithms.
Really good cross-platform intra-Python interprocess communication. This should look just like inter-microprocess communication, for process portability.
Traceback extensions, so you can better annotate your code in ways that will show up in tracebacks. Similar in spirit to what Zope does with __traceback_info__ (and what Paste copies). But a built-in convention.
Some improved exception messages. I hate signature errors, for instance; very unhelpful (especially with the implicit-self-related counting error). Python does pretty well -- lots of languages do a lot worse -- but I also think this is way up there in terms of usability and can always use improvement.
No sys.path crappiness. Think this stuff through, one right way, no "search paths" (implicit awfulness). Careful or explicit relative imports.
A shortcut for for item in seq: yield item, when doing generator pass-through. Maybe yield *seq. Saw this on python-dev already. As I use generators more I encounter this desire fairly frequently.
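This wish was eventually granted: Python 3.3's `yield from` (PEP 380) is exactly this pass-through shortcut:

```python
def passthrough(seq):
    yield 'head'
    yield from seq  # equivalent to: for item in seq: yield item

print(list(passthrough([1, 2])))  # ['head', 1, 2]
```

PEP 380 goes further than the simple loop, also forwarding send() and throw() into the delegated generator.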
Tuple-unpacking, like first, *rest = path.split('/'). This would be mighty useful. The guy who wrote the Jython review raves about Python packing; this would be another fine enhancement to a delightful feature. I desire this frequently.
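This one also later became real syntax — extended iterable unpacking (PEP 3132, Python 3):

```python
path = 'usr/local/bin'
first, *rest = path.split('/')
print(first, rest)  # usr ['local', 'bin']
```

The starred name can appear in any position (`*init, last = seq` works too), which covers the head/tail splitting this item asks for.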
Better signature unpacking, allowing def foo(*args, bar=None). I think I saw something recently about this in python-dev, and it might happen before 3.0.
One thing I noticed from some recent Ruby criticisms I've read is that its syntax errors can be hard to read -- and then I realized this is in part because you have to wait until the end of the file for the compiler to catch some syntax errors that Python can catch immediately because of whitespace.
Continuation-based web frameworks (like Seaside) seem lame to me. I'm so much more impressed by event-based programming, and continuations (used in that way) seem like a way to avoid explicit events. Events are the web. Continuations are people who want to make the web into an MVC GUI. GUIs are so twentieth century. Grow up!
Language-level continuations (as opposed to the explicit continuation style used by Twisted) are cool and all -- I don't think they hurt -- but I don't think they are actually very useful either. That you can implement other things in terms of them doesn't impress me either; I'm also pretty comfortable with a series of specific language features instead of one giant meta-feature, at least when it comes to control flow.
I like the idea of adding to the ways you can express data in Python (it's not bad now). But actual macros don't really interest me (in a general sense) as I think they work on the wrong level -- they deal with syntax transformations, as opposed to object-level transformations. A fairly conservative set of additions to the syntax (I note queries and the interface/etc extension above) are better than macros.
Global interpreter lock? I still don't care, sorry. I want better processes, not better threads.
self in method bodies would not become optional. I.e., you'd still talk about self.instance_variable. That's a brevity that I don't want. Ruby's @ is fine (but doesn't fit Python's aesthetic). But the implicit this is horrible, and unworkable in Python anyway. People often clump signature-self whining together with body-self whining. One of those groups has a point, the other group doesn't. Anyway, I think the complaining would go away entirely if signatures and super were fixed, because people only complain about method bodies because they think it adds weight to their argument against signatures (even though I think the opposite is true).
I actually like ? and ! in function names, as in Scheme. But I don't think it is worth adding.
I also kind of like no-public-attributes (i.e., all outside access is explicitly identified). property() helps a lot, but isn't all I would like.
The lexical scoping problems people have don't bother me. I wouldn't want Javascript-like variable declarations to define scope. It would be nice if Python gave an error earlier for scoping mistakes, though, and a better error too. I think this is statically detectable. Maybe something in addition to just the global declaration would be useful. People ask for lexical, and I can certainly see the benefit. (Note: this is only an issue when assigning to variables in enclosing scopes)
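The mistake in question — assigning to a name you also read from an outer scope — surfaces today only at runtime, as an UnboundLocalError rather than an early, clearly-worded error (Python 3 later added `nonlocal` for the enclosing-scope assignment case):

```python
x = 1

def bump():
    try:
        x = x + 1  # the assignment makes x local, so the read fails
    except UnboundLocalError as e:
        return type(e).__name__

print(bump())  # UnboundLocalError
```

Since the compiler already decides at compile time that x is local here, the error is statically detectable in principle — which is the point being made above.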
Well, that's all for now. Now you know everything I think is wrong with Python (that I can think of right now), and most of what I think would make it right. Looking over it again, I actually don't think it's that pie-in-the-sky, though some parts (like self) are indeed tricky, and involve more than just syntactic additions.
Update:
Well, I don't really need, or understand deeply, all the advanced/dynamic/metaclass features you're asking for, so I'll comment on the ones I care about.
Definite agreement:
- non-iterable strings
- Object initialization
- Microprocesses. This is the holy Python grail for me. They are so sweet in Erlang.
- Better tuple unpacking. Another win for Erlang that we could steal and improve. I like your examples.
- Same for signature unpacking.
- Explicit relative imports.
- orderby.
- Better reload().
- LINQ-like stuff.
- Unicode - my number 2 behind microprocesses.
Questions:
- You just say that __variables__ are lame, and mangling is better, without ever saying what you'd do about them. How does Python 4k handle private variables/name mangling properly?
- Why aren't "?" and "!" worth allowing in function names? I want these, personally. I feel like they would aid the descriptiveness of my code significantly. (How about "-"?)
I'd add (and I don't follow python-dev, so forgive me if they exist/are coming soon):
- Built-in currying
- local variable dump in exceptions, like in cgi module
- Microprocesses. Oh, did I mention that one already?
- ipython in the standard distribution
- Real, Good, Pythonic, Documented, Cross-Platform GUI library. We're talking 4k here right? Can we ever make this happen?
- By class-private variables I mean __private, not __magic__. Class-private variables are simply never needed. If you have self.__really_private you can just use self._MyClass_really_private for similar effect (actually self._MyClass__really_private is the identical effect)
- I think adding ? and ! to function names (or symbol names in general) changes the feel of the language too much. I'd add them if I was starting from zero, but at this point I wouldn't want to use those because it would be too jarring against the bulk of existing code. Dashes are, of course, completely out ;) Also, they conflict with outside naming conventions, making some Python symbols hard to access from other languages. I don't know if this is a big deal in practice, but I think there's something good about using a lowest-common-denominator in naming. Case sensitivity is useful for the same reason (though part of me would like to be case and underscore insensitive, to remove the source of needless style differences).
For the things you would add:
- There is a proposal for a partial function, which I think is what you mean by currying. Maybe in Python 2.5? I personally don't think it's very readable in practice. lambda is actually not half bad for this one case.
- I agree that a nicer interactive prompt -- roughly based on ipython I imagine -- would be very useful.
- Local variable dumping is possible currently, but not enabled in the default traceback. I don't know if it should be -- I think it's overwhelming except in interactive environments. It should be included in more interactive environments; maybe standard library support would help. Goes with the __traceback_info__ stuff.
- GUI library... seems too hard, even for 4K ;) If it hasn't happened outside of the standard library, how can it happen inside? Plus GUIs are the past, the web is the future ;) XUL will rock, though! I'll be all over that.
Bill: Check out PEP 309 for the currying/partial functions thing. According to the PEP, it'll be in Python 2.5.
PartialApplication is officially in for 2.5. I cannot resist pointing out my recipe here:
It's sort of PA++ ;)
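For reference, the proposal discussed in these comments did land: functools.partial shipped in Python 2.5, per PEP 309:

```python
from functools import partial

def add(a, b):
    return a + b

inc = partial(add, 1)  # pre-bind the first positional argument
print(inc(41))  # 42
```

partial objects also accept pre-bound keyword arguments, and expose .func, .args and .keywords for introspection.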
I very much want to see 'locals' and 'globals' be sort of accessible keywords much like you suggest 'super' and 'self', and also 'class'. It'd make dealing with the various name-spaces a cinch. Or maybe 'module' instead of 'globals'.
That'd be nice.
Why Python 4K? Since most of this is not backward compatible, I would be very thankful if you began making PEPs for it all.
Personally, I don't understand all the stuff here, but I vote for:
- really fast interpreter startup
- turns class into one of several possible constructor arguments
- order by syntax
- microprocesses (haven't read about it yet, but it sounds cool).
About your assertion that GUIs are old-style, I'm not totally convinced, but I really think GUIs should be able to use bookmarks, protocols and REST-style programming. What I see as a limitation in web-style programming is the lack of "push" facilities.
Thanks for this wonderful post.
The GUI stuff was really my slightly facetious rejection of those people who complain about how inelegant web programming is, and how it should be more like GUI programming. I think they are just opening a can of worms for themselves, by trying to impose a style of programming they are familiar with on a domain where it is not appropriate. Of course, I also think GUI programming is a bit retro all considered -- I genuinely think that as a user, not just as a programmer. But that only directs my interest, it doesn't really affect the language much one way or the other.
Some of these aren't language extensions per se, but just hard implementation problems: microprocesses are an example of that, as is interpreter startup. I am hopeful that PyPy will open up this kind of development.
Good ideas, but please let strings yield characters on iteration :-) I have a lot of code using string iteration.
And self/cls shouldn't be keywords, and should be the first parameter of methods. Really :)
String iteration just doesn't make sense. They don't yield characters... they just yield smaller strings. Strings that contain strings that contain strings. Strings just aren't containers in Python -- they are indexable, but to iterate something should be a container. It's only (IMHO) an accident of an earlier Python that didn't have the iterator protocol, that you can iterate over strings. Strings should grow a method like .chars() that yields all the characters. Of course, not backward compatible, but the errors should be obvious ("TypeError: strings are not iterable").
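The "strings that contain strings that contain strings" point is easy to demonstrate in today's Python — iterating a string yields length-1 strings, which are themselves iterable, and so on:

```python
s = 'abc'
first = next(iter(s))
print(type(first).__name__, first, next(iter(first)))  # str a a
```

There is no distinct character type to bottom out on, which is why the post argues iteration over strings is an accident of the pre-iterator-protocol days rather than a real container behaviour.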
I've been thinking about the self thing more. It's one of the hardest to resolve issues (and I doubt it will be resolved). But I think it would be reasonable if functions defined in a class scope meant something different than a function defined outside of a class scope. So if you were injecting a function definition into a class, it would (reasonably) work just like it does now, with an explicit self parameter. But if you are doing it in a class scope, the metaclass (type) wraps the function in some fashion. It seems less intrusive than class-private variables, even (which are statically determined).
The self parameter also works badly with decorators, though the same might be true if that parameter was removed. Getting a decorator to act properly with a method and with a function is hard.
My personal feeling is that a multi-line lambda is a mistake, like Guido says. However, I agree with the functionally oriented people that having to use a local named function breaks code locality and tends to make it hard to follow the code in Python written with a more functional flavor.
I think that the right Pythonic answer is to introduce a syntax kind of like Ruby blocks, where a function call can optionally be followed by a colon, then an arg list and a suite underneath, and the arguments and suite would be converted to a local function passed as the last argument to the function. Here is an example:

def open_in(filename, block):
    f = open(filename)
    try:
        return block(f)
    finally:
        f.close()

max_char = open_in("foo.txt"): (f)
    x = 0
    for l in f:
        x = max(x, len(l))
    return x
Or something like that. There are a few warts:
- The arg list of the anonymous block is awkward, maybe another colon at the end would help.
- The block is a lexical closure, not in the same frame as the outside function so you get the usual assignment to local problem, but its worse because it looks like a suite in the local scope the same way for and while suites are. That makes its more confusing.
- In a similar vein you need a return or yield in the block to get a result out which makes simple things like reduce look painful (but list comprehension saves filters and maps).
To see what I mean about the return or yield thing, here is an example. First the new form of reduce:

def reduce(l, v, f):
    for x in l:
        v = f(v, x)
    return v

Now, the clunky way to write a sum:

sum = reduce(x, 0): (a, b)
    return a + b

Vs. the more elegant (IMHO):

sum = reduce(x, 0): (a, b)
    a + b
However, sum is a simple contrived example because of course sums, products, any, and all should be built in, and possibly the only real uses involve suites large enough that return or yield is help in understanding, instead of a hindrance.
<pre>
interface IFoo(Bar):
    a = 1
</pre>
Consider what this declaration means:
<pre>
def tmp():
    a = 1
IFoo = interface('IFoo', (Bar,), tmp)
</pre>
With this we can express it as a user-defined function definition. Interface or class declarations can be emulated by executing tmp and using its local scope as the definitions list.
"I actually like ? and ! in function names, as in Scheme. But I don't think it is worth adding."
Scheme? I thought Ruby invented that! ;)
In any case, I hear you on the whitespace with one exception: I wonder if comments should be exempt from that rule. Sometimes it's just easier to scan down through a file when the important comments are left-aligned.
Sometimes it's just easier to scan down through a file when the important comments are left-aligned.
Can't you do that right now? I just tried it and it seems to work fine.
Surely you meant to write "py-in-the-sky"?
I think making self.something spellable as .something would be nice.
Not a big fan of that myself. I guess I usually read code in my head on some level, and .something isn't readable in the same way self.something is. Also, that dot is really small. As a separator that's size fine -- good even -- but if it is an independent modifier that's bothersome. Plus I'm not so much looking to save a few keystrokes by taking it out of the signature, as much as bringing some symmetry in signature and use.
. "
I'm guessing yu'll be using = for equality?
I.
I like the way you're handling self and cls. One thing to note here is that any method that contains a self. is an instance method, any method that contains a cls. is a class method if it doesn't contain a self. as well. If it contains neither, it's a static method. Both the classmethod and staticmethod functions go away.
Another minor point here is that allowing both self. and cls. in the same method gives you access to the class object without any special syntax.
I don't know... I'd consider that much too magical and opaque. »Explicit is better than implicit«, right? | http://www.ianbicking.org/my-python-4k.html | CC-MAIN-2016-44 | refinedweb | 3,993 | 66.44 |
Converting CSV to HTML table in Python
In this post, we are going to see how to convert a CSV file to an HTML table in Python. Here, we will discuss two methods that are available in Python.
2 Methods:
- Using pandas.
- Using PrettyTable.
CSV file:
- Expansion: Comma Separated Value file.
- To exchange data between applications, a CSV file can be used.
- It is a text file that has information that is separated by commas.
- Extension: .csv
Method 1: Using pandas
Among the 2 methods, the simplest one is using pandas. Pandas is very suitable to work with data that is in structural form. It is fast and provides expressive data structures. We are going to show you how we can use the Pandas library to convert a CSV into an HTML table.
Installation:
pip install pandas
Below is the CSV file,
“”
- First, we imported the pandas library.
- Then we read the CSV file using the read_csv() method.
- Syntax: pandas.read_csv(csv_file)
- After that, our CSV file is converted into HTML file using to_html() method.
- Syntax: file.to_html(filename)
Now, we have a look at the program.
import pandas file = pandas.read_csv("Student.csv") file.to_html("StudentTable.html")
After executing the above code, our HTML table will look like below,
“”
Method 2: Using PrettyTable
When there is a need to create quick and simple ASCII tables, PrettyTable library can be used.
Installation:
pip install PrettyTable
Let’s look into our program.
- We have imported the PrettyTable library initially.
- Then we opened the CSV file in reading mode using open() method.
- Syntax: open(filename,mode)
- After that, we read all the lines from the CSV files using readlines() method.
- Syntax: file.readlines()
- We assigned file[0] to the head variable. Because file[0] contains the headings present in the CSV file.
- Then we used the split() method which is used to separate the given string based on the separator given.
- Synatx: string.split(separator)
- We added rows to the table using the add_row() method.
- Syntax: table.add_row(data)
- Then, get_html_string() method is used to return the string representation of HTML table’s version.
- Syntax: table.get_html_string()
- Finally, we wrote the entire data into the final HTML file using the file.write() method
from prettytable import PrettyTable file = open("Student.csv", 'r') file = file.readlines() head = file[0] head = head.split(',') #for headings table = PrettyTable([head[0], head[1],head[2]]) for i in range(1, len(file)) : table.add_row(file[i].split(',')) htmlCode = table.get_html_string() final_htmlFile = open('StudentTable2.html', 'w') final_htmlFile=final_htmlFile.write(htmlCode)
After the execution of the code, our output will look like below.
“”
I hope that this tutorial has taught you something new and useful. | https://www.codespeedy.com/converting-csv-to-html-table-in-python/ | CC-MAIN-2020-45 | refinedweb | 447 | 68.26 |
Controlling Appearance
The visual aspects of RadSpell consist of the button or link that triggers the spell check and the dialog for the spell check. You can control the appearance by the following methods:
RadSpell descends from WebControl in the System.Web.UI.WebControls namespace and has standard properties for Width, Height, BackColor, BorderWith, BorderStyle, Font, ForeColor, etc. These properties apply to the button or link and can be modified at design or run-time.
Set the RadSpell CSSClass property to change the RadSpell button or link visual properties through styles.
Set the AllowAddCustom property to true (the default) to enable the "Add Custom" button in the spell check dialog.
Set the ButtonType property to show the spell check trigger as a PushButton, LinkButton, ImageButton or None. The ImageButton value renders a link with assigned .rscLinkImg class that can be used to modify the look of the rendered control. A sample implementation of custom CSS for RadSpell's ImageButton can be seen in the default demo.
Change the look and feel for the RadSpell dialog by selecting a predefined skin. | https://docs.telerik.com/devtools/aspnet-ajax/controls/spell/appearance-and-styling/controlling-appearance | CC-MAIN-2019-43 | refinedweb | 180 | 55.34 |
> On Thursday 04 August 2005 02:22 am, Andi Kleen wrote:> > On Thu, Aug 04, 2005 at 12:05:50AM -0700, James Cleverdon wrote:> > > diff -pruN 2.6.12.3/arch/i386/kernel/acpi/boot.c> > > n12.3/arch/i386/kernel/acpi/boot.c ---> > > 2.6.12.3/arch/i386/kernel/acpi/boot.c 2005-07-15 > 14:18:57.000000000> > > -0700 +++ n12.3/arch/i386/kernel/acpi/boot.c 2005-08-04> > > 00:01:10.199710211 -0700 @@ -42,6 +42,7 @@ static inline void > > > acpi_madt_oem_check(char *oem_id, char> > > *oem_table_id) { } extern void __init > clustered_apic_check(void); > > > static inline int ioapic_setup_disabled(void) { return 0; }> > > +extern int gsi_irq_sharing(int gsi);> > > #include <asm/proto.h>> > >> > > #else /* X86 */> > > @@ -51,6 +52,9 @@ static inline int > ioapic_setup_disabled( #include > > > <mach_mpparse.h>> > > #endif /* CONFIG_X86_LOCAL_APIC */> > >> > > +static inline int gsi_irq_sharing(int gsi) { return gsi; }> >> > Why is this different for i386/x86-64? It shouldn't.> > True. Have added code for i386. Unfortunately, I didn't see > one file that is shared by both architectures and which is > included when building with I/O APIC support. So, I > duplicated the function into io_apic.c> > > As a unrelated note we really need to get rid of this whole ifdef > > block.> >> > > +++ n12.3/arch/x86_64/Kconfig 2005-08-03 > 21:31:07.487451167 -0700> > > @@ -280,13 +280,13 @@ config HAVE_DEC_LOCK> > > default y> > >> > > config NR_CPUS> > > - int "Maximum number of CPUs (2-256)"> > > - range 2 256> > > + int "Maximum number of CPUs (2-255)"> > > + range 2 255> > > depends on SMP> > > - default "8"> > > + default "16"> >> > Don't change the default please.> >> > > .> > > > +> > > + retry_vector:> > > + vector = assign_irq_vector(gsi);> > > +> > > + /*> > > + * Sharing vectors means sharing IRQs, so scan irq_vectors for> > > previous + * use of vector and if found, return that IRQ. 
> > > However, we never want + * to share legacy IRQs, which usually> > > have a different trigger mode + * than PCI.> > > + */> >> > Can we perhaps force such sharing early temporarily even when the > > table is not filled up? This way we would get better test > coverage of > > all of this.> >> > That would be later disabled of course.> > Suppose I added a static counter and pretended that every > third non-legacy IRQ needed to be shared?> > > Rest looks ok to me.> >> > -Andi> > Sigh. Have to attach the file again. Sorry about that.> > Signed-off-by: James Cleverdon <jamesclv@us.ibm.com>I think you were going to change this line, which fixed the jumps in theirq distribution:--- io_apic.c 2005-08-11 10:14:33.564748923 -0700+++ io_apic.c.new 2005-08-11 10:15:55.412331115 -0700@@ -617,7 +617,7 @@ int gsi_irq_sharing(int gsi) * than PCI. */ for (i = 0; i < NR_IRQS; i++) - if (IO_APIC_VECTOR(i) == vector) { + if (IO_APIC_VECTOR(i) == vector && i != gsi) { if (!platform_legacy_irq(i)) break; /* got one */ IO_APIC_VECTOR(gsi) = 0;But it's not in this version of the patch.Thanks,--Natalie> --> James Cleverdon> IBM LTC | http://lkml.org/lkml/2005/8/15/4 | CC-MAIN-2014-15 | refinedweb | 460 | 76.93 |
Most often in real world applications we need to understand how one variable is determined by a number of others.
For example:
How does sales volume change with changes in price. How is this affected by changes in the weather?
How does the amount of a drug absorbed vary with dosage and with body weight of patient? Does it depend on blood pressure?
How are the conversions on an ecommerce website affected by two different page titles in an A/B comparison?
How does the energy released by an earthquake vary with the depth of it's epicenter?
How is the interest rate charged on a loan affected by credit history and by loan amount?
Answering questions like these, requires us to create a model.
A model is a formula where one variable (response) varies depending on one or more independent variables (covariates). For the loan example, interest rate might depend on FICO score, state, loan amount, and loan duration amongst others.
One of the simplest models we can create is a Linear Model where we start with the assumption that the dependent variable varies linearly with the independent variable(s).
While this may appear simplistic, many real world problems can be modeled usefully in this way. Often data that don't appear to have a linear relationship can be transformed using simple mappings so that they do now show a linear relationship. This is very powerful and Linear Models, therefore, have wide applicability.
They are one of the foundational tools of Data Science.
Creating a Linear Model involves a technique known as Linear Regression. It's a tool you've most probably already used without knowing that's what it was called.
Remember a typical physics lab experiment from high school? We had some input X (say force) which gave some output Y (say acceleration).
You made a number of pairs of observations x, y and plotted them on graph paper.
Then you had to fit a straight line through the set of observations using a visual "best fit".
And then you read off 'm' the slope, and 'b', the y-intercept from the graph, hoping it was close to the expected answer. By drawing the "best fit" line you were, in effect, visually estimating m and b without knowing it.
You were doing informal Linear Regression. We're going to do this a little more formally. And then make it more sophisticated.
Let's start with the basics.
Remember the equation for a straight line from high school?
$$Y = mX + b$$
where $m$ is the slope and $b$ is the y-intercept.
Very briefly and simplistically, Linear Regression is a class of techniques for
Fitting a straight line to a set of data points.
This could also be considered reverse engineering a formula from the data.
We'll develop this idea starting from first principles and adding mathematical sophistication as we go along. But before that, you're probably curious what were the 'm' and 'b' values for this graph. We use modeling software to generate this for us and we get:
We see two numbers, "Intercept" and "Slope". Independent of what software we use to do our linear regression for us, it will report these two numbers in one form or another. The "Intercept" here is the "b" in our equation. And the "Slope" is the slope of Y with respect to the independent variable.
To summarize, we have a dataset (the observations) and a model (our guess for a formula that fits the data) and we have to figure out the parameters of the model (the coefficients m and b in our best fit line) so that the model fits the data the "best". We want to use our data to find coefficients for a formula so that the formula will fit the data the "best".
As we continue, we'll actually run the modeling software and generate these numbers from real data. Here we just saw pictures of the results.
Once you had your visual best fit line and had read off the m and b you probably said something to the effect:
"The data follows a linear equation of the form Y = mX + b where m (slope)=(somenumber) and b (y intercept)=(someothernumber)"
You may recall that the equation is not an exact representation because most probably your data points are not all in a perfectly straight line. So there is some error varying from one data point to the next data point. Your visual approach subjectively tried to minimize some intuitive "total error" over all the data.
What you did was intuitive "Linear Regression". You estimated m and b by the "looks right to me" algorithm. We will start with this intuitive notion and rapidly bring some heavy machinery to bear that will allow us to solve pretty sophisticated problems.
At this point your lab exercise may well ask you to approximate what Y will be when X is some number outside the range of your measurements. Then you use the equation above where m and b are now actual numbers say 2.1 and 0.3 respectively i.e the equation is Y = 2.1X + 0.3
This equation is your "model"
And you plug in an X to get a Y.
This is where you are using your model to predict a value or, in other words, you are saying that I didn't use this value of X in my experiment and I don't have it in my data but I'd like to know what this value of X will map to on the Y axis.
Based on my model Y = 2.1X + 0.3 if I had used this value in my experiment then I believe I would have got an output Y of approximately what the straight line suggests.
You also want to be able to say "my error is expected to be (some number), so I believe the actual value will lie between Y-error and Y+error".
When used like this we call the X variable the "predictor" as values of Y are predicted based one values of X, and the Y variable the "response".
But before we do that let's take another trip back to the physics lab and peek over at the neighboring lab group's plots. We might see a different plot. So which one is "correct"?
Actually the graphs above were plotted by software that generated some points with random variation and then plotted a line through them.
What the software did was compute a function called a "loss function", a measure of error. Then, it "tried out" multiple straight lines until it found one that minimized the "loss function" value for that choice -- then it read off the Intercept and X-slope for that line.
Because this error estimation is an important part of our modeling we're going to take a more detailed look at it.
We want to create a simple formula for the error or difference between the value of Y given by our straight line, and the actual value of Y from our data set. Unless our line happens to pass through a particular point, this error will be non-zero. It may be positive or negative. We take the square of this error (we can do other things like take the abs value, but here we take the square.....patience, all will be revealed) and then we add up such error terms for each data point to get the total error for this straight line and this data set.
Important: for a different set of samples of the very same experiment we will get a different data set and possibly a different staright line and so almost certainly a different total error.
The squared error we used is a very commonly used form of the total error previously know as "quadratic error". It also has the property that errors in the negative direction and positive direction are treated the same and this "quadratic error" or "square error" is always have a positive value.
So for now we will use the "squared error" as our representation of error. [1]
So Regression in general is any approach we might use to estimate the coefficients of a model using the data to estimate the coefficients by minimizing the "squared error". Statistical software uses sophisticated numerical techniques using multivariable calculus to minimize this error and give us estimated values for the coefficients.
Let's try this on some real data.
We're going to look at a data set of Loan data from Lending Club, a peer lending web site. They have anonymized data on borrowers and loans that have been made. Loan data has many attributes and we'll explore the whole data set in a bit but for now we'll just look at how borrower FICO score affects interest rate charged.
%pylab inline import pandas as pd # we have to clean up the raw data set which we will do # in the next lesson. But for now let's look at the cleaned up data. # import the cleaned up dataset into a pandas data frame df = pd.read_csv('../datasets/loanf.csv') # extract FICO Score and Interest Rate and plot them # FICO Score on x-axis, Interest Rate on y-axis intrate = df['Interest.Rate'] fico = df['FICO.Score'] p = plot(fico,intrate,'o') ax = gca() xt = ax.set_xlabel('FICO Score') yt = ax.set_ylabel('Interest Rate %')
Populating the interactive namespace from numpy and matplotlib
Here we see a distinct downward linear trend where Interest Rate goes down with increasing FICO score. But we also see that for the same FICO score there is a range of Interest rates. This suggests that FICO by itself might not be enough to predict Interest Rate.
So the natural question that arises is what happens if Y depends on more than one variable. And this is where the power of mathematical generalization comes in. The same principle applies but in multiple dimensions. Not just two or three but much larger numbers. Twenty, thirty or even hundred independent variables are not out of question if we want to model real world data.
But for now let's look at $Y$ as a function of two independent variables, $X_1$ and $X_2$, so
$$ Y = a_0 + a_1X_1 + a_2X_2 $$
Here $a_0$ is the Intercept term and $a_1, a_2$ are the coefficients of $X_1, X_2$, the independent variables respectively.
So to look at a real data set with potentially multiple independent variables we're going to use the Lending Club data set in the next step.
from IPython.core.display import HTML def css_styling(): styles = open("../styles/custom.css", "r").read() return HTML(styles) css_styling() | http://nbviewer.jupyter.org/github/nborwankar/LearnDataScience/blob/master/notebooks/A1.%20Linear%20Regression%20-%20Overview.ipynb | CC-MAIN-2018-26 | refinedweb | 1,790 | 62.38 |
Scattered comments. I am following this with interest as the design will
impact how smoothly GUIs like NetBeans are able to let users drop in new
tasklibs and start using them (or build them, for that matter).
Jose Alberto Fernandez wrote:
> >
> > I'm not a class loader expert and can only hope that breaking JDK 1.1
> > compatibility will make the whole class loader business a bit
> > easier. At first glance, they should probably be separate.
> >
>
> I think they should follow the current rules of <taskdef> with respect to
> classloaders.
> What happens otherwise if I define tasks by extending the classes of other
> tasks defined by other people in other task libraries?
Maybe you oughtn't do that to begin with. How do you know they really meant
their tasks to be subclassed? (How do you know their task library is even
available? or the same version you were expecting?) I would consider the
current behavior of <taskdef> questionable--if you mean to share classes, you
should declare it, and ideally the author of the original class should have
permitted it. Wasn't there some proposal for referencing classloader IDs to
explicitly permit reuse of classes? Or you could explicitly list the name of
the other task library as a code dependency.
(By the way consider making loading of the task classes lazy, this would
substantially reduce Ant's startup time I think.)
Stefan:
> > * Will each task library be assigned to an XML-Namespace to avoid
> > task-name clashes?
Agree with Wolf that the library should define the namespace, and with Stefan
that use of Class-Path is most standard for referencing extra libraries (JDK
handles it).
Also to Wolf--perhaps tasks should be loadable *only* from task files (not
unpacked)? This is not without precedent, people don't complain that their
form builder doesn't like unpacked JavaBeans. If you want to use
META-INF/antlib.xml or similar, it seems very odd to not have the stuff in a
JAR. Making a JAR is trivial anyway.
> I propose that <tasklib> itself does not require a specific extention name.
Seconded, lots of tools expect *.zip/*.jar and would get confused. On the
other hand there is some precedent with *.war and so on. Prefer *.ant to *.tsk
(you may want to include data types, ...).
> <taskdef....>
> <description href="">
> Here goes the short description for ANT help.
> </description>
> </taskdef>
Or location within the JAR file? An idea--NetBeans modules permit links to
internal JavaHelp-format docs ("helpsets"), which has seemed to work well, and
might be useful for tasklibs the same way. Even if you don't have JavaHelp
available, its navigation files are pretty straightforward to parse manually
(simple XML DTD) and display in other ways as needed. And you could optionally
provide help IDs for individual tasks, which could be used in a GUI
environment. If you don't care about navigation, and want to do it all in
HTML, you just give a *.hs with nothing but a <homeID> and a reference to one
*.map with a single mapping entry to your HTML index page.
-Jesse
--
Jesse Glick <mailto:Jesse.Glick@netbeans.com>
NetBeans, Open APIs <>
tel (+4202) 3300-9161 Sun Micro x49161 Praha CR | http://mail-archives.apache.org/mod_mbox/ant-dev/200105.mbox/%3C3B0A7CE2.17DE8465@netbeans.com%3E | CC-MAIN-2016-26 | refinedweb | 537 | 74.39 |
File-scoped namespace declaration
In C# 9 Top Level Statement feature was introduced. It allows in the app's entry point file not to write a boilerplate code.
Instead of the following:
namespace HelloWorld { class Program { static void Main(string[] args) { Console.WriteLine("Hello World!"); } } }
It's enough only one line of code vs ten.
Console.WriteLine("Hello World!");
And it was amazing but having such ability only in one place does not bring much profit especially in a big project. So, C# 10 goes further and it brings a feature with the name File-scoped namespace declaration.
Let's consider an example. We have a project. It contains two classes. We see that the class is declared inside the namespace. It is inside the curly braces
namespace _1._File_Scoped_Namespace { public class Author { public string Name { get; set; } public string LastName { get; set; } public Author(string name, string lastname) => (Name, LastName) = (name, lastname); } }
File-scoped namespace declaration allows to get rid of the curly braces and make the code even more elegant. After the namespace, we need to put the semicolon.
namespace _1._File_Scoped_Namespace; public class Author { public string Name { get; set; } public string LastName { get; set; } public Author(string name, string lastname) => (Name, LastName) = (name, lastname); }
Extended property patterns
A new feature which is called Extended property patterns is highly useful for the conditions with nested properties.
Let's see in action. We have a
Book class. And we want to implement a method that will tell us if the book has a discount.
public static bool doesHaveDiscount(Book book) { return false; }
Let's agree if the author's last name matches the specific value then we have a discount, otherwise, we don't.
public static bool doesHaveDiscount(Book book) => book switch { { Author: { LastName: "Richter" } } or { Author: { LastName: "Price" } } => true, _ => false };
We see nested properties of the author's object. Now we can simplify this construction by accessing the property with the dot.
public static bool doesHaveDiscount(Book book) => book switch { { Author.LastName: "Richter" } or { Author.LastName: "Price" } => true, _ => false };
It makes the code more readable and cleaner.
Constant interpolated strings
Strings iInterpolation is very cool because we can put the object right inside the string without leaving its borders.
public string ThankYouMessage() { return string.Empty; }
Let declare two variables. And make an interpolation for the second one.
var message = "Thank you for buying the book!"; var thankYouMessage = $"{message} Enjoy";
In C# 10 we can declare those strings as the constants because it is obvious that the values are not intended to change.
public string ThankYouMessage() { const string message = "Thank you for buying the book!"; const string thankYouMessage = $"{message} Enjoy"; return thankYouMessage; }
Records
One of the features of C# 9 was the record types. Long story short they allow to make classes immutable. C# 10 goes further and now we can create record structures. Let's consider the immutable structure
public readonly record struct Point { public int X { get; } public int Y { get; } }
After the name of the structure, we can pass the parameters.
public readonly record struct Point(int X, int Y);
and get rid of the rest of the code.
Also, in C# 10 we have the ability to add the
sealed keyword while overriding the
ToString() method in a record type. It tells the compiler: "don't create
ToString() for all inherited types, use this one".
record Circle { public sealed override string ToString() { return typeof(Circle).Name; } }
Assignment and declaration in the same deconstruction
The next feature is related to deconstruction. In previous versions, we could assign the values while deconstruction, or we could initialize them first and then assign them.
internal class Author { public string Name { get; set; } public string LastName { get; set; } public Author(string name, string lastname) => (Name, LastName) = (name, lastname); public void Deconstruct(out string name, out string lastname) => (name, lastname) = (Name, LastName); public override string ToString() => $"{Name} {LastName}"; }
var author = new Author("Andrei", "Fedotov"); Console.WriteLine(author); (string name1, string lastName) = author; Console.WriteLine(name1); Console.WriteLine(lastName); var name2 = string.Empty; var lastname2 = string.Empty; (name2, lastname2) = author; Console.WriteLine(name2); Console.WriteLine(lastname2);
C# 10.0 removes that restriction.
var lastname3 = string.Empty; (var name3, lastname3) = author; Console.WriteLine(name3); Console.WriteLine(lastname3);
Global using directives
One more feature is Global using directives.
If importing the same namespaces in each app's file is annoying you then this feature is for you. C# 1- allows mark the imports as global and they will be imported into the files automatically,
global using System.Collections.Generic; global using System.Linq;
namespace _6._Global_Using_directive; public class Store { private readonly IEnumerable<Book> Books; public Store(IEnumerable<Book> books) => Books = books; public IEnumerable<Book> GetBooks(Author author) => Books.Where(b => b.Author.LastName == author.LastName); }
Moreover, in the project configuration, there is a flag to enable implicit usings.
<ImplicitUsings>enable</ImplicitUsings>
Improvements of structure types
Since C# 10 we can declare the parameterless constructor for the structures.
public struct Point { public Point() { X = double.NaN; Y = double.NaN; } public Point(double x, double y) => (X, Y) = (x, y); public double X { get; set; } public double Y { get; set; } public override string ToString() => $"X: {X}, Y: {Y}"; }
var point = new Point(1,2); Console.WriteLine(point); // X: 1, Y: 2 var point2 = new Point(); Console.WriteLine(point2); // X: NaN, Y: NaN var point3 = default(Point); Console.WriteLine(point3); // X: 0, Y: 0
default ignores the parameterless constructor and gives the default value for a type.
The repo with the examples is here.
Thank you for reading the post, I hope you enjoyed it.
You also might want to check C# 9 list of features.
Cheers!
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/andreisfedotov/whats-new-in-c-10-new-features-of-c-10-15lo | CC-MAIN-2022-05 | refinedweb | 944 | 58.18 |
bool fclose(int handle)
Closes the file referenced by handle;
returns true if successful and
false if not.
int feof(int handle)
Returns true if the marker for the file referenced
by handle is at the end of the file (EOF)
or if an error occurs. If the marker is not at EOF, returns
false.
int fflush(int handle)
Commits any changes to the file referenced by
handle to disk, ensuring that the file
contents are on disk and not just in a disk buffer. If the operation
succeeds, the function returns true; otherwise it
returns false.
string fgetc(int handle)
Returns the character at the marker for the file referenced by
handle and moves the marker to the next
character. If the marker is at the end of the file, the function
returns false.
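As a sketch of how feof( ) and fgetc( ) work together, the following creates a small temporary file and reads it back one character at a time (the temporary-file setup, using tempnam( ) and sys_get_temp_dir( ), is only there to make the example self-contained):

```php
// Create a throwaway file so the example is self-contained
$path = tempnam(sys_get_temp_dir(), "demo");
$fp = fopen($path, "w");
fwrite($fp, "abc");
fclose($fp);

// Read it back one character at a time
$fp = fopen($path, "r");
$chars = "";
while (!feof($fp)) {
    $c = fgetc($fp);
    if ($c === false) {     // fgetc() returns false once the marker hits EOF
        break;
    }
    $chars .= $c;
}
fclose($fp);
unlink($path);

print $chars;               // abc
```

Note the explicit `=== false` test: feof( ) often becomes true only after a read attempt has already failed at EOF.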
array fgetcsv(int handle, int length[, string delimiter])
Reads the next line from the file referenced by
handle and parses the line as a
comma-separated values (CSV) line. The longest line to read is given
by length. If supplied,
delimiter is used to delimit the values
for the line instead of commas. For example, to read and display all
lines from a file containing tab-separated values, use:
$fp = fopen("somefile.tab", "r");
while($line = fgetcsv($fp, 1024, "\t")) {
print "<p>" . count($line) . "fields:</p>";
print_r($line);
}
fclose($fp);
string fgets(int handle, int length)
Reads a string from the file referenced by
handle; a string of no more than
length characters is returned, but the
read ends at length - 1 (for the
end-of-line character) characters, at an end-of-line character, or at
EOF. Returns false if any error occurs.
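A common idiom is to read a file line by line with fgets( ) in a loop; this sketch builds a two-line temporary file first so it can run as-is:

```php
// Build a small two-line file for the example
$path = tempnam(sys_get_temp_dir(), "demo");
$fp = fopen($path, "w");
fwrite($fp, "one\ntwo\n");
fclose($fp);

// Read it back line by line
$out = array();
$fp = fopen($path, "r");
while ($line = fgets($fp, 1024)) {
    $out[] = rtrim($line);  // strip the trailing end-of-line character
}
fclose($fp);
unlink($path);

print implode(",", $out);   // one,two
```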
string fgetss(int handle, int length[, string tags])
Reads a string from the file referenced by
handle; a string of no more than
length characters is returned, but the
read ends at length-1 (for the end-of-line
character) characters, at an end-of-line character, or at EOF. Any
PHP and HTML tags in the string, except those listed in
tags, are stripped before returning it.
Returns false if any error occurs.
array file(string path[, int include])
Reads the file at path and returns an
array of lines from the file. The strings include the end-of-line
characters. If include is specified and is
true, the include path
is searched for the file.
bool file_exists(string path)
Returns true if the file at
path exists and false
if not.
int fileatime(string path)
Returns the last access time, as a Unix timestamp value, for the file
path. Because of the cost involved in
retrieving this information from the filesystem, this information is
cached; you can clear the cache with clearstatcache( ).
int filectime(string path)
Returns the creation date, as a Unix timestamp value, for the file
path. Because of the cost involved in
retrieving this information from the filesystem, this information is
cached; you can clear the cache with clearstatcache( ).
int filegroup(string path)
Returns the group ID of the group owning the file
path. Because of the cost involved in
retrieving this information from the filesystem, this information is
cached; you can clear the cache with clearstatcache( ).
int fileinode(string path)
Returns the inode number of the file path,
or false if an error occurs. This information is
cached; see clearstatcache.
int filemtime(string path)
Returns the last-modified time, as a Unix timestamp value, for the
file path. This information is cached; you
can clear the cache with clearstatcache( ).
int fileowner(string path)
Returns the user ID of the owner of the file
path, or false if an
error occurs. This information is cached; you can clear the cache
with clearstatcache( ).
int fileperms(string path)
Returns the file permissions for the file
path; returns false if
any error occurs. This information is cached; you can clear the cache
with clearstatcache( ).
int filesize(string path)
Returns the size, in bytes, of the file
path. If the file does not exist, or any
other error occurs, the function returns false.
This information is cached; you can clear the cache with
clearstatcache( ).
string filetype(string path)
Returns the type of file given in path.
The possible types are:
fifo
The file is a fifo pipe.
char
The file is a character special device.
dir
path is a directory.
block
The file is a block special device.
link
The file is a symbolic link.
file
The file is a regular file.
unknown
The file's type could not be determined.
bool flock(int handle, int operation[, int would_block])
Attempts to lock the file referenced by
handle. The operation is one of the
following values:
LOCK_SH
Shared lock (reader)
LOCK_EX
Exclusive lock (writer)
LOCK_UN
Release a lock (either shared or exclusive)
LOCK_NB
Add to LOCK_SH or LOCK_EX to
obtain a non-blocking lock.
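A minimal sketch of the calls above (the log file and message are placeholders invented for the example); LOCK_EX blocks until the lock is granted unless LOCK_NB is OR-ed in:

```php
<?php
// Append a line to a shared file while holding an exclusive (writer) lock.
$path = tempnam(sys_get_temp_dir(), 'log');   // placeholder log file
$fp = fopen($path, 'a');

if (flock($fp, LOCK_EX)) {        // blocks until the lock is acquired
    fwrite($fp, "one log entry\n");
    flock($fp, LOCK_UN);          // release for other readers/writers
} else {
    print "could not acquire lock\n";
}

// flock($fp, LOCK_EX | LOCK_NB) would instead return false immediately
// if another process already held the lock.
fclose($fp);
unlink($path);
?>
```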
double floor(double number)
Returns the largest integer value less than or equal to
number.
void flush( )
Sends the current output buffer to the client and empties the output
buffer. See Chapter 13 for more information on
using the output buffer.
int fopen(string path, string mode[, bool include])
Opens the file specified by path and
returns a file resource handle to the open file. If
path begins with
http://, an HTTP connection is opened and a file
pointer to the start of the response is returned. If
path begins with
ftp://, an FTP connection is opened and a file
pointer to the start of the file is returned; the remote server must
support passive FTP.
If path is php://stdin,
php://stdout, or php://stderr,
a file pointer to the appropriate stream is returned.
The parameter mode specifies the
permissions to open the file with. It must be one of the following:
r
Open the file for reading; file pointer will be at beginning of file.
r+
Open the file for reading and writing; file pointer will be at
beginning of file.
w
Open the file for writing. If the file exists, it will be truncated
to zero length; if the file doesn't already exist,
it will be created.
w+
Open the file for reading and writing. If the file exists, it will be
truncated to zero length; if the file doesn't
already exist, it will be created. The file pointer starts at the
beginning of the file.
a
Open the file for writing. If the file exists, the file pointer will
be at the end of the file; if the file does not exist, it is created.
a+
Open the file for reading and writing. If the file exists, the file
pointer will be at the end of the file; if the file does not exist,
it is created.
If include is specified and is
true, fopen( ) tries to locate
the file in the current include path.
If any error occurs while attempting to open the file,
false is returned.
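To make the mode table above concrete, here is a small sketch (the temp-file path and strings are invented for the demonstration) contrasting "w", "a", and "r":

```php
<?php
$path = tempnam(sys_get_temp_dir(), 'modes');

$fp = fopen($path, 'w');   // truncate/create; pointer at the start
fwrite($fp, "first\n");
fclose($fp);

$fp = fopen($path, 'a');   // append; pointer at EOF, contents preserved
fwrite($fp, "second\n");
fclose($fp);

$fp = fopen($path, 'r');   // read; pointer back at the start
print fread($fp, filesize($path));   // prints "first" then "second"
fclose($fp);

unlink($path);
?>
```

Had the second fopen( ) used "w" instead of "a", the first line would have been truncated away.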
int fpassthru(int handle)
Outputs the file pointed to by handle and
closes the file. The file is output from the current file pointer
location to EOF. If any error occurs, false is
returned; if the operation is successful, true is
returned.
bool fputs(int handle, string string[, int length])
This function is an alias for fwrite( ).
string fread(int handle, int length)
Reads length bytes from the file
referenced by handle and returns them as a
string. If fewer than length bytes are
available before EOF is reached, the bytes up to EOF are returned.
mixed fscanf(int handle, string format[, string name1[, ... string nameN]])
Reads data from the file referenced by
handle and returns a value from it based
on format. For more information on how to
use this function, see sscanf.
If the optional name1 through
nameN parameters are not given, the values
scanned from the file are returned as an array; otherwise, they are
put into the variables named by name1
through nameN.
int fseek(int handle, int offset[, int from])
Moves the file pointer in handle to the
byte offset. If
from is specified, it determines how to
move the file pointer. from must be one of
the following values:
SEEK_SET
Sets the file pointer to the byte offset
(the default)
SEEK_CUR
Sets the file pointer to the current location plus
offset bytes
SEEK_END
Sets the file pointer to EOF minus offset
bytes
This function returns 0 if the function was
successful and -1 if the operation failed.
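A short sketch of the three origin constants (the ten-byte string and the php://temp scratch stream are chosen just for illustration):

```php
<?php
$fp = fopen('php://temp', 'w+');   // scratch stream for the demo
fwrite($fp, "0123456789");

fseek($fp, 3);                 // SEEK_SET (the default): absolute offset 3
print fread($fp, 2) . "\n";    // prints "34"

fseek($fp, 2, SEEK_CUR);       // skip two bytes forward from position 5
print fread($fp, 1) . "\n";    // prints "7"

fseek($fp, -2, SEEK_END);      // two bytes back from EOF
print fread($fp, 2) . "\n";    // prints "89"

fclose($fp);
?>
```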
int fsockopen(string host, int port[, int error[, string message[, double timeout]]])
Opens a TCP or UDP connection
to a remote host on a specific
port. By default, TCP is used; to connect
via UDP, host must begin with the protocol
udp://. If specified,
timeout indicates the length of time in
seconds to wait before timing out.
If the connection is successful, a virtual file pointer is returned,
which can be used with functions such as fgets( )
and fputs( ). If the connection fails,
false is returned. If
error and
message are supplied, they are set to the
error number and error string, respectively.
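As an illustrative sketch only (example.com is a placeholder host, and the snippet needs network access to run), the returned pointer works with the ordinary file functions:

```php
<?php
// Issue a bare HTTP/1.0 request over a raw TCP connection.
$fp = fsockopen('example.com', 80, $errno, $errstr, 10);
if (!$fp) {
    print "connection failed: $errno $errstr\n";
} else {
    fputs($fp, "HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n");
    while (!feof($fp)) {
        print fgets($fp, 512);   // echo the response headers
    }
    fclose($fp);
}
?>
```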
array fstat(int handle)
Returns an associative array of information about the file referenced
by handle. The following values (given here
with their numeric and key indexes) are included in the array:
dev (0)
The device on which the file resides
ino (1)
The file's inode
mode (2)
The mode with which the file was opened
nlink (3)
The number of links to this file
uid (4)
The user ID of the file's owner
gid (5)
The group ID of the file's owner
rdev (6)
The device type (if the file is on an inode device)
size (7)
The file's size (in bytes)
atime (8)
The time of last access (in Unix timestamp format)
mtime (9)
The time of last modification (in Unix timestamp format)
ctime (10)
The time of last inode change (in Unix timestamp format)
blksize (11)
The blocksize (in bytes) for the filesystem
blocks (12)
The number of blocks allocated to the file
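A small sketch showing that each value is available both by key and by numeric index (the five-byte payload is invented for the example):

```php
<?php
$fp = fopen('php://temp', 'w+');
fwrite($fp, "hello");

$info = fstat($fp);
print $info['size'] . "\n";   // prints 5
print $info[7] . "\n";        // same value via the numeric index

fclose($fp);
?>
```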
int ftell(int handle)
Returns the byte offset to which the file referenced by
handle is set. If an error occurs, returns
false.
int ftruncate(int handle, int length)
Truncates the file referenced by handle to
length bytes. Returns
true if the operation is successful and
false if not.
mixed func_get_arg(int index)
Returns the index element in the function
argument array. If called outside a function, or if
index is greater than the number of
arguments in the argument array, func_get_arg( )
generates a warning and returns false.
array func_get_args( )
Returns the array of arguments given to the function as an indexed
array. If called outside a function, func_get_args(
) returns false and generates a warning.
int func_num_args( )
Returns the number of arguments passed to the current user-defined
function. If called outside a function, func_num_args(
) returns false and generates a warning.
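These three functions together support variadic user functions; a minimal sketch (sum( ) is an invented example, not a built-in):

```php
<?php
function sum() {
    $total = 0;
    $args = func_get_args();              // all arguments, indexed from 0
    for ($i = 0; $i < func_num_args(); $i++) {
        $total += func_get_arg($i);       // equivalently: $args[$i]
    }
    return $total;
}

print sum(1, 2, 3) . "\n";   // prints 6
print sum() . "\n";          // prints 0
?>
```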
bool function_exists(string function)
Returns true if a function named
function has been defined, and
false otherwise. The comparison to check for a
matching function is case-insensitive.
int fwrite(int handle, string string[, int length])
Writes string to the file referenced by
handle. The file must be open with write
privileges. If length is given, only that
many bytes of the string will be written. Returns the number of bytes
written, or -1 on error.
object get_browser([string name])
Returns an object containing information about the
user's current browser, as found in
$HTTP_USER_AGENT, or the browser identified by the
user agent name. The information is
gleaned from the browscap.ini file. The version
of the browser and various capabilities of the browser, such as
whether or not the browser supports frames, cookies, and so on, are
returned in the object.
string get_cfg_var(string name)
Returns the value of the PHP configuration variable
name. If name
does not exist, get_cfg_var( ) returns
false. Only those configuration variables set in a
configuration file, as returned by cfg_file_path(
), are returned by this function—compile-time
settings and Apache configuration file variables are not returned.
string get_class(object object)
Returns the name of the class of which the given object is an
instance. The class name is returned as a lowercase string.
array get_class_methods(mixed class)
If the parameter is a string, returns an array containing the names
of each method defined for the specified class. If the parameter is
an object, this function returns the methods defined in the class of
which the object is an instance.
array get_class_vars(string class)
Returns an associative array of default properties for the given
class. For each property, an element with a key of the property name
and a value of the default value is added to the array. Properties
that do not have default values are not returned in the array.
string get_current_user( )
Returns the name of the user under whose privileges the current PHP
script is executing.
array get_declared_classes( )
Returns an array containing the name of each defined class. This
includes any classes defined in extensions currently loaded in PHP.
array get_defined_constants( )
Returns an associative array of all constants defined by extensions
and the define( ) function and their values.
array get_defined_functions( )
Returns an array containing the name of each defined function. The
returned array is an associative array with two keys,
internal and user. The value of
the first key is an array containing the names of all internal PHP
functions; the value of the second key is an array containing the
names of all user-defined functions.
array get_defined_vars( )
Returns an array of all defined environment, server, and user-defined
variables.
array get_extension_funcs(string name)
Returns an array of functions provided by the extension specified by
name.
array get_html_translation_table([int which[, int style]])
Returns the translation table used by either
htmlspecialchars(
) or htmlentities( ).
If which is
HTML_ENTITIES, the table used by
htmlentities( ) is returned; if
which is
HTML_SPECIALCHARS, the table used by
htmlspecialchars( ) is returned. Optionally, you
can specify which quotes style you want returned; the possible values
are the same as those in the translation functions:
ENT_COMPAT (default)
Converts double quotes, but not single quotes
ENT_NOQUOTES
Does not convert either double quotes or single quotes
ENT_QUOTES
Converts both single and double quotes
array get_included_files( )
Returns an array of the files included into the current script by
include( ), include_once( ),
require( ), and require_once(
).
array get_loaded_extensions( )
Returns an array containing the names of every extension compiled and
loaded into PHP.
bool get_magic_quotes_gpc( )
Returns the current value of the quotes state for
GET/POST/cookie operations. If
true, all single quotes (''),
double quotes (""), backslashes
(\), and NUL-bytes ("\0") are
automatically escaped and unescaped as they go from the server to the
client and back.
array get_meta_tags(string path[, int include])
Parses the file path and extracts any HTML
meta tags it locates. Returns an associative array, the keys of which
are name attributes for the meta tags, and the
values of which are the appropriate values for the tags. The keys are
in lowercase, regardless of the case of the original attributes. If
include is specified and
true, the function searches for
path in the include path.
array get_object_vars(object object)
Returns an associative array of the properties for the given object.
For each property, an element with a key of the property name and a
value of the current value is added to the array. Properties that do
not have current values are not returned in the array, even if they
are defined in the class.
string get_parent_class(mixed object)
Returns the name of the parent class for the given object. If the
object does not inherit from another class, returns an empty string.
array get_required_files( )
This function is an alias for get_included_files(
).
string get_resource_type(resource handle)
Returns a string representing the type of the specified resource
handle. If
handle is not a valid resource, the
function generates an error and returns false. The
kinds of resources available are dependent on the extensions loaded,
but include "file",
"mysql link", and so on.
string getcwd( )
Returns the path of the PHP process's current
working directory.
array getdate([int timestamp])
Returns an associative array containing values for various components
for the given timestamp time and date. If
no timestamp is given, the current date
and time is used. The array contains the following keys and values:
seconds
Seconds
minutes
Minutes
hours
Hours
mday
Day of the month
wday
Numeric day of the week (Sunday is
"0")
mon
Month
year
Year
yday
Day of the year
weekday
Name of the day of the week
("Sunday" through
"Saturday")
month
Name of the month ("January"
through "December")
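A sketch that reassembles a readable date from the pieces (the output format is arbitrary, and the actual values depend on the local time zone):

```php
<?php
$d = getdate();   // no argument: the current date and time

printf("%s, %s %d, %d %02d:%02d:%02d\n",
       $d['weekday'], $d['month'], $d['mday'], $d['year'],
       $d['hours'], $d['minutes'], $d['seconds']);

// e.g. "Tuesday, March 4, 2003 14:05:09" (actual output varies)
?>
```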
string getenv(string name)
Returns the value of the environment variable
name. If name
does not exist, getenv( ) returns
false.
string gethostbyaddr(string address)
Returns the hostname of the machine with the IP address
address. If no such address can be found,
or if address doesn't
resolve to a hostname, address is
returned.
string gethostbyname(string host)
Returns the IP address for host. If no
such host exists, host is returned.
array gethostbynamel(string host)
Returns an array of IP addresses for host.
If no such host exists, returns false.
int getlastmod( )
Returns the Unix timestamp value for the last-modification date of
the file containing the current script. If an error occurs while
retrieving the information, returns false.
int getmxrr(string host, array hosts[, array weights])
Searches DNS for all Mail Exchanger (MX) records for
host. The results are put into the array
hosts. If given, the weights for each MX
record are put into weights. Returns
true if any records are found and
false if none are found.
int getmyinode( )
Returns the inode value of the file containing the current script. If
an error occurs, returns false.
int getmypid( )
Returns the process ID for the PHP process executing the current
script. When PHP runs as a server module, any number of scripts may
share the same process ID, so it is not necessarily a unique number.
int getprotobyname(string name)
Returns the protocol number associated with
name in
/etc/protocols.
string getprotobynumber(int protocol)
Returns the protocol name associated with
protocol in
/etc/protocols.
int getrandmax( )
Returns the largest value that can be returned by rand(
).
array getrusage([int who])
Returns an associative array of information describing the resources
being used by the process running the current script. If
who is specified and is equal to 1,
information about the process's children is
returned. A list of the keys and descriptions of the values can be
found under the getrusage(2) Unix command.
int getservbyname(string service, string protocol)
Returns the port associated with service
in /etc/services.
protocol must be either TCP or UDP.
string getservbyport(int port, string protocol)
Returns the service name associated with
port and
protocol in
/etc/services.
protocol must be either TCP or UDP.
array gettimeofday( )
Returns an associative array containing information about the current
time, as obtained through gettimeofday(2).
The array contains the following keys and values:
sec
The current number of seconds since the Unix epoch.
msec
The current number of microseconds to add to the number of seconds.
minuteswest
The number of minutes west of Greenwich the current time zone is.
dsttime
The type of Daylight Savings Time correction to apply (during the
appropriate time of year, a positive number if the time zone observes
Daylight Savings Time).
string gettype(mixed value)
Returns a string description of the type of
value. The possible values for
value are "boolean",
"integer", "double",
"string", "array",
"object", "resource",
"NULL", and "unknown type".
string gmdate(string format[, int timestamp])
Returns a formatted string for a timestamp date and time. Identical
to date( ), except that it always uses Greenwich
Mean Time (GMT), rather than the time zone specified on the local
machine.
int gmmktime(int hour, int minutes, int seconds, int month, int day, int year)
Returns a timestamp date and time value from the provided set of
values. Identical to mktime( ), except that the
values represent a GMT time and date, rather than one in the local
time zone.
string gmstrftime(string format[, int timestamp])
Formats a GMT timestamp. See strftime for more
information on how to use this function.
void header(string header[, bool replace])
Sends header as a raw HTTP header string;
must be called before any output is generated (including blank lines,
a common mistake). If the header is a Location header, PHP also
generates the appropriate REDIRECT status code. If
replace is specified and
false, the header does not replace a header of the
same name; otherwise, the header replaces any header of the same
name.
bool headers_sent( )
Returns true if the HTTP headers have already been
sent. If they have not yet been sent, the function returns
false.
string hebrev(string string[, int size])
Converts the logical Hebrew text string to
visual Hebrew text. If the second parameter is specified, each line
will contain no more than size characters;
the function attempts to avoid breaking words.
string hebrevc(string string[, int size])
Performs the same function as hebrev( ), except
that in addition to converting string,
newlines are converted to <br>\n. If
specified, each line will contain no more than
size characters; the function attempts to
avoid breaking words.
bool highlight_file(string filename)
Prints a syntax-colored version of the PHP source file
filename using PHP's
built-in syntax highlighter. Returns true if
filename exists and is a PHP source file;
otherwise, returns false.
bool highlight_string(string source)
Prints a syntax-colored version of the string
source using PHP's
built-in syntax highlighter. Returns true if
successful; otherwise, returns false.
int hexdec(string hex)
Converts hex to its decimal value. Up to a
32-bit number, or 2,147,483,647 decimal (0x7FFFFFFF hexadecimal), can
be converted.
string htmlentities(string string[, int style])
Converts all characters in string that
have special meaning in HTML and returns the resulting string. All
entities defined in the HTML standard are converted. If supplied,
style determines the manner in which
quotes are translated. The possible values for
style are:
ENT_COMPAT (default)
Converts double quotes, but not single quotes
ENT_NOQUOTES
Does not convert either double quotes or single quotes
ENT_QUOTES
Converts both single and double quotes
string htmlspecialchars(string string[, int style])
Converts characters in string that have
special meaning in HTML and returns the resulting string. A subset of
all HTML entities covering the most common characters is used to
perform the translation. If supplied,
style determines the manner in which
quotes are translated. The characters translated are:
Ampersand (&) becomes
&amp;
Double quotes (") become
&quot;
Single quote (') becomes
&#039;
Less than sign (<) becomes
&lt;
Greater than sign (>) becomes
&gt;
The possible values for style are:
ENT_COMPAT (default)
Converts double quotes, but not single quotes
ENT_NOQUOTES
Does not convert either double quotes or single quotes
ENT_QUOTES
Converts both single and double quotes
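A short sketch of the translation and the quote styles (the markup string is invented; the style argument is passed explicitly here because later PHP releases changed the default):

```php
<?php
$raw = '<a href="x">Tom & "Jerry"\'s</a>';

print htmlspecialchars($raw, ENT_COMPAT) . "\n";
// &lt;a href=&quot;x&quot;&gt;Tom &amp; &quot;Jerry&quot;'s&lt;/a&gt;

print htmlspecialchars($raw, ENT_QUOTES) . "\n";
// ...the same, except the single quote also becomes &#039;
?>
```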
int ignore_user_abort([bool ignore])
Sets whether the client disconnecting from the script should stop
processing of the PHP script. If ignore is
true, the script will continue processing, even
after a client disconnect. Returns the current value; if
ignore is not given, the current value is
returned without a new value being set.
string implode(array strings, string separator)
Returns a string created by joining every element in
strings with
separator.
bool import_request_variables(string types[, string prefix])
Imports GET, POST, and cookie variables into the global scope. The
types parameter defines which variables
are imported, and in which order—the three types are
"g" or "G",
"p" or "P", and
"c" or "C". For example, to
import POST and cookie variables, with cookie variables overwriting
POST variables, types would be
"cp". If given, the variable names are prefixed
with prefix. If
prefix is not specified or is an empty
string, a notice-level error is sent due to the possible security
hazard.
bool in_array(mixed value, array array[, bool strict])
Returns true if the given value exists in the
array. If the third argument is provided and is
true, the function will return
true only if the element exists in the array and
has the same type as the provided value (that is,
"1.23" in the array will not match
1.23 as the argument). If the argument is not
found in the array, the function returns false.
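A sketch contrasting loose and strict matching (the array contents are invented for the example):

```php
<?php
$values = array(1.23, "7");

var_dump(in_array("1.23", $values));         // bool(true): "1.23" == 1.23
var_dump(in_array("1.23", $values, true));   // bool(false): types differ
var_dump(in_array(7, $values));              // bool(true): 7 == "7"
var_dump(in_array(7, $values, true));        // bool(false)
?>
```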
string ini_alter(string variable, string value)
This function is an alias for ini_set( ).
string ini_get(string variable)
Returns the value for the configuration option
variable. If
variable does not exist, returns
false.
string ini_restore(string variable)
Restores the value for the configuration option
variable. This is done automatically when
a script completes execution for all configuration options set using
ini_set( ) during the script.
string ini_set(string variable, string value)
Sets the configuration option variable to
value. Returns the previous value if
successful or false if not. The new value is kept
for the duration of the current script and is restored after the
script ends.
int intval(mixed value[, int base])
Returns the integer value for value using
the optional base base (if unspecified,
base 10 is used). If value is a nonscalar
value (object or array), the function returns 0.
int ip2long(string address)
Converts a dotted-quad (standard format) IPv4 address into its equivalent long-integer representation.
array iptcparse(string data)
Parses the IPTC (International Press Telecommunications Council) data
block data into an array of individual
tags with the tag markers as keys. Returns false
if an error occurs or if no IPTC data is
found in data.
bool is_array(mixed value)
Returns true if value
is an array; otherwise, returns false.
bool is_bool(mixed value)
Returns true if value
is a Boolean; otherwise, returns false.
bool is_dir(string path)
Returns true if path
exists and is a directory; otherwise, returns
false. This information is cached; you can clear
the cache with clearstatcache( ).
bool is_double(mixed value)
Returns true if value
is a double; otherwise, returns false.
bool is_executable(string path)
Returns true if path
exists and is executable; otherwise, returns
false. This information is cached; you can clear
the cache with clearstatcache( ).
bool is_file(string path)
Returns true if path
exists and is a file; otherwise, returns false.
This information is cached; you can clear the cache with
clearstatcache( ).
bool is_float(mixed value)
This function is an alias for is_double( ).
bool is_int(mixed value)
This function is an alias for is_long( ).
bool is_integer(mixed value)
This function is an alias for is_long( ).
bool is_link(string path)
Returns true if path
exists and is a symbolic link file; otherwise, returns
false. This information is cached; you can clear
the cache with clearstatcache( ).
bool is_long(mixed value)
Returns true if value
is an integer; otherwise, returns false.
bool is_null(mixed value)
Returns true if value
is null—that is, is the keyword NULL;
otherwise, returns false.
bool is_numeric(mixed value)
Returns true if value
is an integer, a floating-point value, or a string containing a
number; otherwise, returns false.
bool is_object(mixed value)
Returns true if value
is an object; otherwise, returns false.
bool is_readable(string path)
Returns true if path
exists and is readable; otherwise, returns false.
This information is cached; you can clear the cache with
clearstatcache( ).
bool is_real(mixed value)
This function is an alias for is_double( ).
bool is_resource(mixed value)
Returns true if value
is a resource; otherwise, returns false.
bool is_scalar(mixed value)
Returns true if value
is a scalar value—an integer, Boolean, floating-point value,
resource, or string. If value is not a
scalar value, the function returns false.
bool is_string(mixed value)
Returns true if value
is a string; otherwise, returns false.
bool is_subclass_of(object object, string class)
Returns true if object
is an instance of the class class or is an
instance of a subclass of class. If not,
the function returns false.
bool is_uploaded_file(string path)
Returns true if path
exists and was uploaded to the web server using the
file element in a web page form; otherwise,
returns false. See Chapter 7
for more information on using uploaded files.
bool is_writable(string path)
Returns true if path
exists and is writable; otherwise, returns false.
This information is cached; you can clear the cache with
clearstatcache( ).
bool is_writeable(string path)
This function is an alias for is_writable( ).
bool isset(mixed value)
Returns true if value,
a variable, has been set; if the variable has never been set, or has
been unset( ), the function returns
false. | https://docstore.mik.ua/orelly/webprog/php/appa_03.htm | CC-MAIN-2019-18 | refinedweb | 4,623 | 62.98 |
I'm familiar with old OpenGL but just getting up to speed on shaders. I was writing code in Processing and took a working demo that draws a cylinder and attempted to convert it to draw a sphere.
There are three components to the code. First, there is the processing code, a class called PShape that encapsulates an OpenGL shape. The code for a cylinder uses a single QUAD_STRIP. For a sphere, I didn't come up with one, though it occurs to me that's not a bad approach. So, question 1: Does anyone have code that maps a sphere, not as a grid of quads with a north pole and south pole cap, but some kind of gently curving path that completely covers the sphere? If not, what is the best way? I was originally intending to create bands of quads that cover the majority of the sphere except for the poles, and an endcap at each pole.
When I inadvertently had a single QUAD_STRIP with the ends not connected to each other, I could see the earth, with blue and white noise on top. I realized that my code was doing a band of latitudes, and that the end of each strip was not connected to the next strip. So I tried to create a number of strips. When I do, I get all white.
I realize that this may be something involving processing, but most of the expertise for this is in OpenGL, so I'm asking here about that aspect.
bad sphere:
With current code, no texture shows at all. I have left in the "can" code so you can see the code that worked creating a cylinder.
I will show first the shaders, then the Processing code that calls them. I have not changed the shaders at all.
Code :
#define PROCESSING_TEXTURE_SHADER

uniform mat4 transform;
uniform mat4 texMatrix;

attribute vec4 vertex;
attribute vec4 color;
attribute vec2 texCoord;

varying vec4 vertColor;
varying vec4 vertTexCoord;

void main() {
  gl_Position = transform * vertex;
  vertColor = color;
  vertTexCoord = texMatrix * vec4(texCoord, 1.0, 1.0);
}
Code :
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

uniform sampler2D texture;

varying vec4 vertColor;
varying vec4 vertTexCoord;

void main() {
  gl_FragColor = texture2D(texture, vertTexCoord.st) * vertColor;
}
Here is the code in processing that invokes the shaders
Code :
PImage label;
PShape can;
float angle;
PShader texShader;

void setup() {
  size(1280, 800, P3D);
  label = loadImage("earth.jpg");
  //can = createCan(100, 200, 32, label);
  can = createSphere(350, 64, label);
  texShader = loadShader("texfrag.glsl", "texvert.glsl");
}

void draw() {
  background(0);
  shader(texShader);
  translate(width/2, height/2);
  rotateX(-PI/2);
  rotateZ(angle);
  shape(can);
  angle += 0.01;
}

PShape createCan(float r, float h, int detail, PImage tex) {
  textureMode(NORMAL);
  PShape sh = createShape();
  sh.beginShape(QUAD_STRIP);
  sh.noStroke();
  sh.texture(tex);
  for (int i = 0; i <= detail; i++) {
    float angle = TWO_PI / detail;
    float x = sin(i * angle);
    float z = cos(i * angle);
    float u = float(i) / detail;
    sh.normal(x, 0, z);
    sh.vertex(x * r, -h/2, z * r, u, 0);
    sh.vertex(x * r, +h/2, z * r, u, 1);
  }
  sh.endShape();
  return sh;
}

PShape createSphere(float r, int detail, PImage tex) {
  textureMode(NORMAL);
  PShape sh = createShape();
  sh.noStroke();
  sh.texture(tex);
  final float dA = TWO_PI / detail; // change in angle
  // process the sphere one band at a time
  // going from almost south pole to almost north
  // poles must be handled separately
  float theta2 = -PI/2 + dA;
  float SHIFT = PI/2;
  float z2 = sin(theta2);        // height off equator
  float rxyUpper = cos(theta2);  // closer to equator
  for (int i = 1; i < detail; i++) {
    float theta1 = theta2;
    theta2 = theta1 + dA;
    float z1 = z2;
    z2 = sin(theta2);
    float rxyLower = rxyUpper;
    rxyUpper = cos(theta2);      // radius in xy plane
    sh.beginShape(QUAD_STRIP);
    for (int j = 0; j <= detail; j++) {
      float phi = j * dA;        // longitude in radians
      float xLower = rxyLower * cos(phi);
      float yLower = rxyLower * sin(phi);
      float xUpper = rxyUpper * cos(phi);
      float yUpper = rxyUpper * sin(phi);
      float u = phi/TWO_PI;
      sh.normal(xUpper, yUpper, z2);
      sh.vertex(r*xUpper, r*yUpper, r*z2, u, (theta2+SHIFT)/PI);
      sh.normal(xLower, yLower, z1);
      sh.vertex(r*xLower, r*yLower, r*z1, u, (theta1+SHIFT)/PI);
    }
    sh.endShape();
  }
  return sh;
}
https://www.opengl.org/discussion_boards/showthread.php/183563-Trying-to-create-a-sphere-in-Processing | CC-MAIN-2017-51 | refinedweb | 693 | 62.78 |
cloud_firestore 0.13.2
cloud_firestore: ^0.13.2
Flutter plugin for Cloud Firestore, a cloud-hosted, noSQL database with live synchronization and offline support on Android and iOS.
Use this package as a library
Depend on it
Run this command:
With Flutter:
$ flutter pub add cloud_firestore
This will add a line like this to your package's pubspec.yaml (and run an implicit
dart pub get):
dependencies:
  cloud_firestore: ^0.13.2
Alternatively, your editor might support
flutter pub get.
Check the docs for your editor to learn more.
Import it
Now in your Dart code, you can use:
import 'package:cloud_firestore/cloud_firestore.dart'; | https://pub.dev/packages/cloud_firestore/versions/0.13.2/install | CC-MAIN-2021-17 | refinedweb | 107 | 57.67 |
[Tracking Requested - why for this release]:
+++ This bug was initially created as a clone of Bug #1397369 +++
User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:57.0) Gecko/20100101 Firefox/57.0
Build ID: 20170905220108
Steps to reproduce:
1. Enable Bookmarks toolbar
2. Right-click on the toolbar, choose "New Bookmark/Folder...", and then click on the [x] button
3. Try to delete, rename, or move a bookmark item
Actual results:
New Bookmarks/Folders are created at step 2. No longer able to edit bookmark items at step 3 until the browser is restarted.
Expected results:
New Bookmarks/Folders should not be created at step 2. Able to edit bookmark items at step 3.
Regression window:
Regressed by: 0a4690dfd7b3 Mark Banner — Bug 1071513 - Enable async PlacesTransactions for nightly builds. r=mak
I can reproduce this, and I think I know what it is - it looks similar to bug 1396888 - the dialog is going away whilst we're resolving the batch for the async transactions so it all goes out of scope and leaves the transaction manager in a bad state.
The patch here should fix it, but I'm on Mac and I need to test on Windows, so I'll push it to try and give it a run in the morning.
I tested the try build on Windows and it seems to work fine.
Comment on attachment 8905249 [details] Bug 1397387 - Move async actions out of the opening/closing cycles of the bookmarks dialog to ensure they finish. ::: browser/components/places/PlacesUIUtils.jsm:671 (Diff revision 2) > - return ("performed" in aInfo && aInfo.performed); > + > + let performed = ("performed" in aInfo && aInfo.performed); > + > + if (this.useAsyncTransactions) { > + batchBlockingDeferred.resolve(); > + batchBlockingDeferred = null; nit: this nullification should not be necessary
Pushed by mbanner@mozilla.com: Move async actions out of the opening/closing cycles of the bookmarks dialog to ensure they finish. r=mak
I have reproduced this bug with Nightly 57.0a1 (2017-09-05) on Windows 8.1, 64 Bit! This bug's fix is verified on Latest Nightly 57.0a1. Build ID : 20170910100150 User Agent : Mozilla/5.0 (Windows NT 6.3; WOW64; rv:57.0) Gecko/20100101 Firefox/57.0
Build ID: 20171016220427 User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:58.0) Gecko/20100101 Firefox/58.0 Verified as fixed on Firefox Nightly 58.0a1 on Windows 10 x 64, Windows 7 x32, Mac OS X 10.12 and Ubuntu 16.04 x64. | https://bugzilla.mozilla.org/show_bug.cgi?id=1397387 | CC-MAIN-2017-47 | refinedweb | 405 | 70.29 |
OpenCV ?false? memory leak in VisualStudio
Maybe this is not a true memory leak but it must be worrisome to some developers. I have seen similar posts to this around the web.
My most recent experience is with wxWidgets and OpenCV 4.0.0 The single line below that declares a simple Mat makes the difference between VS detecting a memory leak or not. I have tried several suggestions including delayed dll loading for OpenCV dlls, freeing up a mutex in core/system.cpp but no luck so far.
Is this really a memory leak? Any suggestions??
// wxWidgets_OpenCV_LeakTest.cpp #include <wx/wx.h> #include <opencv2/core.hpp> #include <opencv2/highgui.hpp> using namespace cv; class MyApp : public wxApp { public: virtual bool OnInit() { Mat m(Size(32, 32), CV_8UC1, Scalar(200)); wxFrame *frame = new wxFrame(NULL, wxID_ANY, "Hello World"); frame->Show(true); return true; } }; wxIMPLEMENT_APP(MyApp);
Detected memory leaks! Dumping objects -> {12828} normal block at 0x0000000000516580, 12 bytes long. Data: <opencvtrace> 4F 70 65 6E 43 56 54 72 61 63 65 00 {12827} normal block at 0x0000000000552890, 48 bytes long. Data: < eQ > 01 00 00 00 CD CD CD CD 80 65 51 00 00 00 00 00 {12826} normal block at 0x0000000000555E10, 256 bytes long. Data: < > CD CD CD CD CD CD CD CD CD CD CD CD CD CD CD CD | https://answers.opencv.org/question/210166/opencv-false-memory-leak-in-visualstudio/ | CC-MAIN-2021-25 | refinedweb | 223 | 74.49 |
Noel Rappin (noelrappin@gmail.com), Senior Software Engineer, Motorola, Inc.
19 Jun 2007
Using the Google Web Toolkit (GWT), a Java programmer can write rich Asynchronous JavaScript + XML (Ajax) applications completely
in the Java™ programming language. Cypal Studio for GWT, designed for the Eclipse IDE, provides support for managing GWT constructs.
Learn how Cypal Studio for GWT helps create new GWT modules, supports the creation of remote procedure calls, and makes it easy to view
and deploy your Web applications.
Cypal Studio and the GWT
GWT is a set of tools that allows a Java programmer to write dynamic Ajax Web applications completely within the Java programming
language, with no JavaScript required. A GWT application runs in all the major browsers, allows for rich interaction with the user, and can be
fully tested and debugged within your Java development environment.
The GWT framework has four major components. A collection of widgets, implemented in the Java language, provides all the standard user
interface (UI) functionality you would expect in a somewhat simpler application program interface (API) than, say, Swing. A remote procedure
mechanism allows for communication between client and server, with GWT handling all the pipe and data translation. A fully integrated browser
simulator allows GWT to run on its own during development, including niceties like being able to set breakpoints in your editor during a
GWT debugging session. Finally, a compiler converts your Java code into the cross-browser JavaScript code that is actually executed in the
client browser, managing browser incompatibilities so you don't have to.
While GWT simplifies the process by which you create an Ajax application, it still has several parts you must keep synchronized for it to work.
As of this writing, advanced tool support for GWT is beginning to emerge in the major Java development environments.
Cypal Studio for GWT is a plug-in for Eclipse that simplifies many of the common tasks performed during GWT
development. This article is not meant to be a full introduction to GWT. See Resources for in-depth documentation,
including how GWT works and how to create simple applications.
Install Cypal Studio within the Eclipse Web Tools Platform
Before your can work with Cypal Studio for GWT, you must download it (see Resources). As of this writing, the
current GWT version is 1.3, and it's available for Microsoft® Windows®, Mac OS X, and Linux®. Cypal may not support the
GWT V1.4 release candidate that was available as this was written. Simply download
the file for your operating system, extract it, and place the resulting folder someplace handy.
Next, you need a version of Eclipse with the Web Tools Platform (WTP) plug-ins. WTP is an omnibus collection of tools supporting Web
application development. It includes editor support for Web standards, such as HTML and Cascading Style Sheets (CSS), JavaServer Pages
(JSP) editor support, support for creating and maintaining the database you use in your Web application, and running the application on a Web
server during development.
Those features are all very nice, but they're somewhat outside the scope of this article. At the moment, we are interested in WTP because
Cypal Studio for GWT requires it to run; see Resources for further information about WTP.
The easiest way to get an Eclipse system that has WTP enabled is to download the whole thing in one shot. This is especially recommended
if you are downloading Eclipse for the first time. The WTP download page offers an all-in-one download for all the WTP plug-ins, as well as a
handful of prerequisite plug-ins. The page is a little on the confusing side: Look for Web Tools Platform;
All-in-one. As of this writing, the current WTP version is 1.5.4. There are versions for Windows, Linux, and Mac OS X; download the one
appropriate to your platform.
If downloading the whole thing at once strikes you as too straightforward or — more likely — you already have
Eclipse and you don't want to download the whole thing all over again, you can download WTP as a plug-in. The download page lists a few
requirement plug-ins. Download those, extract them, and place them in the plugins directory of your Eclipse installation. Then download
Web Tools Platform (WTP, JST, and WST combined), with a file name something like wtp-R-1.5.4.zip. Extract that file to your plugins
directory, as well.
Having done all that, you're finally ready to get the latest version of Cypal Studio
for GWT. As of this writing, the current version appears with the name cypal.studio.for.gwt-beta.zip. Extracting that file to your Eclipse
directory places files in the features and plugins directories.
Note: If you had installed the old Googlipse plug-in, you may need to remove that plug-in for the Cypal Studio for GWT plug-in to
install cleanly.
Now that everything is downloaded, there's still one configuration option you must set before you can start. Fire up Eclipse and access the
Preferences window, as shown in Figure 1. If everything has gone well, Cypal Studio should have an entry on the left-hand side.
Simply set the GWT Home setting to the top-level directory of the GWT installation you created earlier.
That should install everything you need. Let's get going.
Create your Cypal Studio project
To use Cypal Studio for GWT, you must create a new Dynamic Web Project or add Cypal Studio to an existing project. To start a new project,
choose File > New. Choose the Dynamic Web Project Wizard, which you can find under the Web heading. You'll see the
window shown below.
To create your project:
Cypal Studio for GWT starts creating a bunch of files, and you may be prompted to accept a license agreement for a document type definition (DTD) or two. When all is said and done, your Eclipse Project Explorer should look like Figure 3.
The Deployment Descriptor is a WTP thing, and I won't spend a lot of time there. Cypal Studio for GWT has created a directory for your source
code, a directory for compiled Java code (when you have some of that), and a directory for WebContent, which so far consists largely of a rather
lonely web.xml file.
At this point, you've created a new Cypal Studio project. Should you want to add Cypal Studio support to an existing Eclipse Dynamic
Web Project, right-click the project in your Project Explorer, then click Properties. From there, choose Project Facets >
Add/Remove Project Facets. You'll see a list of available facets; click Cypal's GWT Facet.
Add your first project module
You have a lovely project skeleton, but no actual code. Let's fix that. The basic unit of GWT code is the module. A module roughly
corresponds to a page users call up from their client browsers. A module generally consists of one or more
Entry Point classes, which are loaded when the module itself loads. In addition, you can specify nonstandard
locations for the module's source code, public Web files, and required JavaScript or CSS files, if any.
Entry Point
To create a module in Cypal Studio for GWT, choose File > New. If there's a listing for Module with the Cypal Studio
toolbox icon, choose that. If not, choose Other > Cypal Studio > Module from the window that appears. Eventually, you wind up with
the window shown below.
The Source folder, by default, is the source folder of your existing project. The Package folder, however, you must set for yourself. The normal
convention is Your Top Level Package.Name of Module. You must also set the name of the module, the Superclass, and the Interfaces
default, as shown.
After clicking Finish, Cypal Studio creates the following elements. (Those who have used GWT before will remember that this is
roughly what you get from GWT's command-line tools, only with a nice easy-to-use graphical user interface [GUI].)
Create and run your code
Initially, the Java and HTML files contain the minimal stub code needed to compile. However, you can put something just a bit more interesting
there: Insert the code shown in Listing 1 into the body section of the HTML file.
<table align=center>
<tr>
<td id="button"></td>
<td id="count"></td>
</tr>
</table>
Then put the Java code shown in Listing 2 in the Java file. You'll also need to have Eclipse generate the import statements for the imported
classes.
int count = 0;
public void onModuleLoad() {
final Button button = new Button("Count Your Clicks!");
final Label label = new Label(String.valueOf(count));
button.addClickListener(new ClickListener() {
public void onClick(Widget sender) {
count += 1;
label.setText(String.valueOf(count));
}
});
RootPanel.get("button").add(button);
RootPanel.get("count").add(label);
}
Because this code is in the onModuleLoad() method, it will run automatically when the module loads.
Without getting too deeply into the details of the GWT widget set, this code creates a button and a label. It adds a listener to the button,
then puts them both into HTML elements whose document object model (DOM) IDs match the strings passed to
RootPanel.get().
onModuleLoad()
RootPanel.get()
To run your system, click Run. From the Run window, choose Gwt Hosted Mode Application > New_configuration.
The window below appears. (You might have to right-click GWT Hosted Mode Application, then click New to see the new
configuration.)
You must specify which project you have in mind and which module of that project. Luckily, Eclipse makes it easy to browse the existing
project space. Click Apply to save your new configuration (give it a unique name first), then click Run to run it. Doing so invokes
GWT hosted mode for your project. (After doing this once, clicking Run will work just like any other Eclipse run target.) Those of you who
tried setting run targets up in Eclipse without Cypal Studio will appreciate the fact that Cypal Studio is about seven steps simpler.
When run, the page should look like Figure 6.
Connecting to a remote service
Communication to the remote server is the key to any Web application, and GWT provides a framework by which your client-side GWT code
can communicate with a Java remote server. The mechanism, more fully described in
"Build an Ajax application using Google Web Toolkit, Apache Derby, and
Eclipse, Part 3," is an Enterprise JavaBean (EJB) merger of multiple classes and interfaces. It's much easier than building each connection
from scratch, but it still has a lot of parts to keep track of.
Cypal Studio for GWT has a couple of handy features to make it even easier for you to create and manage a remote connection. Start the
process by choosing New > Remote Service. (If Remote Service is not in the menu, choose Other > Cypal Studio, just like
you did to create the module.) You see the window shown below.
You must fill in the Name, which is the actual name of your Java server-side class, and the Extended interfaces service URI,
which is the server-side URL your client will actually call. Then click Finish and let Cypal Studio do some work.
Cypal Studio creates three files for you. In your client package, it creates NumberGenerator.java, the main interface for this remote
connection, shown in Listing 3.
public interface NumberGenerator extends RemoteService {
public static final String SERVICE_URI = "/numbergenerator";
public static class Util {
public static NumberGeneratorAsync getInstance() {
NumberGeneratorAsync instance = (NumberGeneratorAsync) GWT
.create(NumberGenerator.class);
ServiceDefTarget target = (ServiceDefTarget) instance;
target.setServiceEntryPoint(GWT.getModuleBaseURL() + SERVICE_URI);
return instance;
}
}
}
Notice that this file is an empty interface at the moment, but Cypal Studio has created a utility object for returning a fully GWT-certified instance
suitable for your remote calls. "Build an Ajax application using Google Web
Toolkit, Apache Derby, and Eclipse, Part 3" contains similar code, along with the suggestion that it might be useful to create a common
method for the boilerplate code.
The return value of that utility is actually of type NumberGeneratorAsync, which is the asynchronous sibling of the
main interface. Every method of the main interface has a matching entry in the asynchronous interface, but with a return type of
void and an additional argument of class AsyncCallback. When you call this code from
your client page, you actually use the async interface, and GWT converts it to the main interface, which is what the server side will see. Then,
you use an AsyncCallback object to actually do something with the server response.
NumberGeneratorAsync
void
AsyncCallback
On the server side, Cypal Studio has created the implementation class NumberGeneratorImpl, which extends
the GWT class RemoteServiceServlet and implements the NumberGenerator interface.
In addition, GWT has modified the web.xml file to register the new remote server for use in deployed applications. The new lines look like Listing 4.
NumberGeneratorImpl
RemoteServiceServlet
NumberGenerator
<servlet>
<servlet-name>NumberGenerator</servlet-name>
<servlet-class>
com.ibm.firstmodule.server.NumberGeneratorImpl</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>NumberGenerator</servlet-name>
<url-pattern>numbergenerator</url-pattern>
</servlet-mapping>
To actually create a remote call, start by adding the method signature to NumberGenerator. The method you'll
implement will play the game "I'm thinking of a number." The method signature is:
public Integer getNumber(int maxNumber);
Save that signature in the NumberGenerator interface, and something interesting happens: Cypal Studio has
added the matching method to NumberGeneratorAsync:
public void getNumber(int maxNumber, AsyncCallback callback);
This is very handy because keeping these two interfaces in sync manually is a bit of a burden. Now, you must also go to your
NumberGeneratorImpl class. Eclipse flags this class in red because the
NumberGenerator interface is no longer fully implemented. Fortunately, you can fix that by adding the following
code, which — before you ask — I realize is on the simplistic side.
public Integer getNumber(int maxNumber) {
return new Integer((new Random()).nextInt());
}
To make the call, I've added it to the module, as shown in Listing 5. Note that to get this to work, I added another row to the HTML body with
two cells, the first with the ID sender and the second with the ID response. The module
now has a button that retrieves the random number from the server and compares it to your click count.
sender
response
public class FirstModule implements EntryPoint {
int count = 0;
private Button button;
private Button sender;
private Label label;
private Label response;
public void onModuleLoad() {
button = new Button("Count Your Clicks!");
sender = new Button("Send Your Count!");
label = new Label(String.valueOf(count));
response = new Label("No Guess Yet");
button.addClickListener(new CountButtonClickListener());
sender.addClickListener(new SendClickListener());
RootPanel.get("button").add(button);
RootPanel.get("count").add(label);
RootPanel.get("sender").add(sender);
RootPanel.get("response").add(response);
}
public class CountButtonClickListener implements ClickListener {
public void onClick(Widget sender) {
count += 1;
label.setText(String.valueOf(count));
}
}
public class SendClickListener implements ClickListener {
public void onClick(Widget sender) {
NumberGeneratorAsync async =
NumberGenerator.Util.getInstance();
async.getNumber(10, new NumberCallback());
}
}
public class NumberCallback implements AsyncCallback {
public void onFailure(Throwable error) {
response.setText("Oops");
}
public void onSuccess(Object resp) {
int intResp = ((Integer) resp).intValue();
if (intResp == count) {
response.setText("Got It!");
} else if (intResp < count) {
response.setText("Too Low");
} else if (intResp > count) {
response.setText("Too High");
}
}
}
}
The key parts of the code are the SendClickListener and the NumberCallback. In
SendClickListener, you use the Cypal Studio-generated Util class to get an instance
of your async interface and call the getNumber() method from that interface.
SendClickListener
NumberCallback
Util
getNumber()
The second argument to the call, which is an instance of NumberCallback, GWT invokes automatically when the
server completes its response. The callback has two branches: onFailure() and
onSuccess()
— depending on whether the server completed the request without an exception.
In this case, if the server succeeds, you compare the result to your count and set text in one of your labels. (You could have designed this so that
the call sends the current count value to the server and the comparison is done server-side; it's just a question of where you want the complexity.)
onFailure()
onSuccess()
When this code is in place, fire up GWT hosted mode, and everything should work.
Deploy your GWT application
One area in which Cypal Studio is perhaps not yet fully mature is in using your Web application with an external server. It's certainly possible to
do so, both inside and outside of Eclipse, but the process is probably a step or two longer than necessary.
To run your Web application inside Eclipse, you need an Eclipse WTP-approved servlet engine. Apache Tomcat is always a fine choice. Next,
compile your GWT application. The easiest way to do this is to run the application in hosted mode as shown above, then click
Compile/Browse. Doing so compiles all your GWT code to JavaScript files and probably opens an external browser on your machine,
which you can ignore.
Next, you can trigger the process of running on your external server by right-clicking the project name in the Project Explorer. Choose
Run As > Run On Server. A window to define a new server appears. Keep the host name as localhost. Select whatever server
type you plan to use. If you haven't previously set up a server of that type, you'll also be prompted for the runtime directory of the server.
After setting up the runtime directory, Eclipse opens an HTML page in your workspace. Don't panic if Eclipse tries to hit the root directory of
the system, which is an error in this case because you haven't specified anything like an index.html file. Simply point the browser at the HTML
file you've created (in this case,), and you'll see something like Figure 8.
If you want to deploy your GWT project to an external browser, you can do so easily by right-clicking the project in the Project Explorer and
choosing Export > WAR. You're prompted where to place the Web Archive (WAR) file. (You must also compile the GWT code as
described above.) You can then drop the WAR file in the appropriate place on the server of your choice, and you're all set.
The future of GWT and Cypal Studio
As of this writing, the exciting GWT news is the V1.3 release of a fully open source GWT. The V1.4 release is expected to add a rich text widget, splitters, and date and number formatters as well as improvements to the development tools and performance. Cypal Studio is still under active development, so visit the Cypal Studio for GWT Web site for new information.? | http://www.ibm.com/developerworks/opensource/library/os-eclipse-ajaxcypal/ | crawl-002 | refinedweb | 3,116 | 54.83 |
Issue
I am creating an API using Flask-RESTful, and am attempting to make a resource that returns all items in a database that are like a user defined string.
The endpoint for this resource is defined as:
api = Api(app) api.add_resource(ItemsList, '/privateapi/item?query=<string:tag>')
And the resource itself is:
class ItemsList(Resource): def get(self, tag): search = f"%{tag}%" items = ItemModel.query.filter(ItemModel.item_name.like(search)).all() return {"items": [item.json() for item in items]}
The problem is that when I send a GET request to the endpoint I get a
404 response.
When I change the endpoint to
api.add_resource(ItemsList, '/privateapi/item/<string:tag>') and then query the API I get the desired response.
However, I would prefer to use the query approach since I am not trying to GET a single record but want to return an array. I just think it makes more sense this way.
What could I try?
Solution
In flask, query parameters aren’t used to match routes (only the path part of the URL is relevant). When you write:
api.add_resource(ItemsList, '/privateapi/item?query=<string:tag>')
You have created a route that will never match (well, not exactly; see below).
You access query parameters in the
request.args value, like this:
from flask import Flask, request from flask_restful import Resource, Api app = Flask(__name__) api = Api(app) class ItemsList(Resource): def get(self): query = request.args.get("query") return f"Query expression was: {query}" api.add_resource(ItemsList, "/privateapi/item") if __name__ == "__main__": app.run(debug=True)
With the above code, if I write:
curl
I get as the response:
"Query expression was: appl"
When I said "You have created a route that will never match" this was actually a bit of lie. In fact, you have created a route that requires a literal
? in the URL path, so if you were to make a request for this URL:
curl
It would work, but it’s not what you want.
This Answer collected from stackoverflow, is licensed under cc by-sa 2.5 , cc by-sa 3.0 and cc by-sa 4.0 | https://errorsfixing.com/what-is-the-correct-way-to-handle-queries-for-a-flask-restful-endpoint-2/ | CC-MAIN-2022-33 | refinedweb | 358 | 66.44 |
Hi Dima,
But isn't it a bottleneck then?
Our throughput limited by a single namenode server?
Sincerely,
Alexandr
On Fri, Aug 19, 2016 at 9:57 PM, Dima Spivak <dspivak@cloudera.com> wrote:
> As far as I know, HBase doesn't support spreading tables across namespaces;
> you'd have to point it at one namenode at a time. I've heard of people
> trying to run multiple HBase instances in order to get access to all their
> HDFS data, but it doesn't tend to be much fun.
>
> -Dima
>
> On Fri, Aug 19, 2016 at 11:51 AM, Alexandr Porunov <
> alexandr.porunov@gmail.com> wrote:
>
> > Hello,
> >
> > I am not sure how to do it but I have to configure federated cluster with
> > hbase to store huge amount of messages (client to client) (40% writes,
> 60%
> > reads). Does somebody have any idea or examples how to configure it?
> >
> > Of course we can configure hdfs in a federated mode but as for me it
> isn't
> > suitable for hbase. If we want to save message from client 1 to client 2
> in
> > the hbase cluster then how hbase know in which namespace it have to save
> > it? Which namenode will be responsible for that message? How we can read
> > client messages?
> >
> > Give me any ideas, please
> >
> > Sincerely,
> > Alexandr
> >
>
>
>
> --
> -Dima
> | http://mail-archives.apache.org/mod_mbox/hbase-user/201608.mbox/%3CCAB-_WnFj345S3LAbWOchpRMDczghw7C2VUBN+s2+Ei=P5b7Q1Q@mail.gmail.com%3E | CC-MAIN-2018-09 | refinedweb | 218 | 72.36 |
As promised, I am going to add FreeRTOS to the bouncing ball application. Why? So I can explain how to do it and create multiple bouncy ball tasks! Also, I get to leave you guys with a teaser question for next time!
Start by opening the Library Manager and adding both the freertos and abstraction-rtos libraries, as shown. You need the abstraction because the Segger emWin port (in the CY8CKIT-028-TFT library) is built to be able to run on any supported kernel, such as FreeRTOS and RTX (inside Mbed OS).
That's easy enough. Now we need to configure the RTOS. Start by copying the example FreeRTOSConfig.h file from libs/freertos/Source/portable to your top-level folder. If you do not make a copy then the build system will pick up the default version, which is fine in many cases, but I need to make a few changes. In your copy of the file hunt down these definitions and check/change them to the values below.
#define configUSE_MUTEXES 1
#define configUSE_RECURSIVE_MUTEXES 1
#define configHEAP_ALLOCATION_SCHEME (HEAP_ALLOCATION_TYPE2)
The mutex choices are an emWin requirement when using FreeRTOS. If you forget to set those values the application will fail to build because the abstraction-rtos requires them (the error is "cyabs_rtos_freertos.c:280: undefined reference to `xSemaphoreCreateRecursiveMutex'").
The heap choice is a means of choosing how memory is allocated and freed when you create and delete resources like tasks and semaphores. The default is type 1, which does not support freeing at all. That sounds a little dumb but, in reality, many applications just grab the memory and never free it so this is a memory optimization. My application is going to create and delete a task and so I chose a slightly more sophisticated scheme.
To complete the RTOS configuration, delete this line so you do not get swamped with warnings.
#warning This is a template. Copy this file to your project and remove this line. Refer to FreeRTOS README.md for usage details.
That's sorted out the RTOS. Now let's tell emWin to use it. Open the Makefile and find the COMPONENTS line. Change the EMWIN to use OS mode, instead of NOS, and tell the abstraction-rtos to support FreeRTOS. it should look like this:
COMPONENTS=EMWIN_OSNTS FREERTOS
Now open main.c so we can turn the bounce() function into a task. Add two include files for the OS near the top of the file.
#include "FreeRTOS.h"
#include "task.h"
Now scroll down and change the bounce() function argument to match the FreeRTOS definition of TaskFunction_t, which is a void function accepting a void* argument. The original looks like this:
void bounce( ball_t* ball )
It already returns a void and so, to make it a FreeRTOS task, just change the argument like this:
void bounce( void* arg )
When I create the task I shall pass in a pointer to the ball configuration as an argument and so the first line of the new function should create a new ball pointer variable.
ball_t* ball = (ball_t*) arg;
The last change to my task is to use the FreeRTOS delay function, which does not hog the CPU like Cy_SysLib_Delay(). The good news is that the FreeRTOS configuration uses a 1ms tick, so the speed calculation does not change, only the function name.
vTaskDelay( speed );
Still with me? I hope so, but the final main.c file is attached, just in case.
When used in OS mode the emWin REQUIRES all calls to come from a task (i.e. after the OS is started) so we need to create a new task to initialize the GUI and start the bouncing ball tasks. You will not have a good time if you try to call GUI_Init() from main(). Create a new function above main() – I called mine GUI_Init_Task() because it is a task that initializes the GUI. I can be quite literal sometimes!
void GUI_Init_Task( void* arg )
{
TaskHandle_t thisTask = xTaskGetCurrentTaskHandle();
unsigned int pri = uxTaskPriorityGet( thisTask );
pri--;
/* Turn on the OLED and set the background color */
GUI_Init();
GUI_SetBkColor( GUI_GRAY );
GUI_Clear();
/* Define the ball initial conditions */
static ball_t b1 = { GUI_RED, 3, 45, +1 /*RIGHT*/, -3 /*UP*/ };
xTaskCreate( bounce, "b1", configMINIMAL_STACK_SIZE*2, &b1, pri, NULL );
vTaskDelete( thisTask );
}
What is going on here? First of all I am getting the task handle from the OS, then I am using it to get the priority of the task (which will get set when I create it in a moment). Then I decrement that value because I want to create a lot of bounce tasks at a lower priority than this task. This means GUI_Init_Task() will create them all but not let them run until it is good and ready!
Then, I initialize the GUI with three lines code ( GUI_* calls) that I just moved from main().
Also stolen from main() is the ball definition from main(). Instead of calling the bounce function directly, though, I create a task.
xTaskCreate( bounce, "b1", configMINIMAL_STACK_SIZE*2, &b1, pri, NULL );
Let’s go through all those arguments for creating the task. The first one is the bounce() function. Second is the name, which can prove handy when debugging. Next is the amount of heap I want to get allocated for the bounce function stack – I chose 2x the absolute minimum (that's not a lot but this is a pretty simple task). The fourth argument is the cool part - I am passing in a pointer to the ball, which will get passed into the bounce() function when it starts. Then I specify the priority that was calculated above. The last argument is a pointer to get the task handle but I just pass in NULL because I do not need it.
So now I have a new task waiting to run. How to "release the hound"? I just delete the current task and the new one will automatically get scheduled.
vTaskDelete( thisTask );
The vTaskdelete() call frees the memory allocated to GUI_Init_Task and so his is why I needed the upgraded heap option in the config file.
We're close now... just need to create our temporary task and start the OS. Here is the code for that (note that it still sets up the random number seed with the ADC):
int main( void )
{
CY_ASSERT( cybsp_init() == CY_RSLT_SUCCESS );
__enable_irq();
initRandomNumber( P10_7 );
xTaskCreate( GUI_Init_Task, "GUI", configMINIMAL_STACK_SIZE*2, NULL, 5, NULL );
vTaskStartScheduler();
}
Cool, now build and program the board. It's... just the same as the non-OS version! So, before you start making effigies of me and sticking pins into my eyes for wasting your time, let's do something that would have been difficult before... add another ball or two or three.
In the GUI_Init_Task() function all you have to do is copy the ball variable (b1), rename it to b2, and change some of the arguments. Then copy the task creation line to start the task. Just change the name of the task to "b2" and the task argument from &b1 to &b2.
static ball_t b1 = { GUI_RED, 3, 45, +1 /*RIGHT*/, -3 /*UP*/ };
static ball_t b2 = { GUI_BLUE, 5, 35, -2 /*LEFT*/, +1 /*DOWN*/ };
xTaskCreate( bounce, "b1", configMINIMAL_STACK_SIZE*2, &b1, pri, NULL );
xTaskCreate( bounce, "b2", configMINIMAL_STACK_SIZE*2, &b2, pri, NULL );
Build and program the kit and you have two balls starting at different positions, of different sizes, speeds, and directions (depending upon what you changed in the variable definition).
I think this is a lot of fun. You can now create half a dozen balls if you wish. It's a nice way of using the same task function multiple times, on different data, with just two extra lines of code.
Go on, have a play with the speed and radius of the balls. I left a defect in the application that I am going to fix next time. Can you find it? You should see it with lots of fast balls bouncing around (Hint: change the background color of the screen in GUI_Init_Task() to BLACK so it is easier to see what happens when the balls cross paths). | https://community.cypress.com/community/modustoolbox/blog/2020/01 | CC-MAIN-2020-10 | refinedweb | 1,341 | 71.44 |
voice key recording
Dear all,
I need help implementing a voice key as response. Does this function still exist?
Many thanks!
Katharina
Hi Katherina,
There is no "official" voice key function in Opensesame, so you probably have to some coding yourself. Nevertheless, there are a couple of versions floating around the forum. How about you give it a browse?
For example, this one here might be a good starting point:
Eduard
Hi Katherina,
If you use psychopy for the backend, you can use its official voice recording module:
You will need to do some scripting then though, and you need a very recent version of OpenSesame, as this functionality has only recently been added to psychopy too.
Thank you very much for the answers. I use Expyriment as the backend.
I remember there was a sund_start_recording plugin and even an element for that.
I did not find it anymore.
@eduard I am sorry, I did not see your reply from March.
In fact, what I need is just to set an event (reply given) by recording a sound, so that I can analyze the data response-locked. Is there a simpler way to do it, maybe without having to code a huge inline? Thanks
I have now browsed some examples on this forum, including the suggested one, but unfortunately it did not help. I must confess, I am not skilled at programming though. Therefore I have several questions:
3. As far as I understand, it "listens" to the sound and if it's louder than the threshold it records.
I don't know how this would work in my setup: I present a sound (50 ms) every 2 seconds and participants should say a word right after they hear a sound. Theoretically, they can already start speaking while the sound is playing. I need the voice response onset and offset times rather than the reaction times. I measure their grip force continuously while they do this. The onset time will be used to lock the grip force data to it in the analysis (similar to EEG, for example). I can't see how this would work. Maybe I do not understand something.
I really appreciate your help and guidance.
You also wrote that it is better to record speech with some other device. Could you suggest some? Is this something you meant?
Thanks a lot!
Hi Katherina,
If timing is important, a voice key could be a problem as it is not particularly reliable/accurate (just imagine all the unrelated sounds/soft voices/coughs/etc. that could cause misses and false alarms). So, maybe it might even make more sense to use some other response measure. I haven't tried it, but the code that you posted looks like something that can be used to measure response onset (you understand correctly, the response is triggered if the volume exceeds a threshold). You have to put it in an inline_script item, probably in the run phase, and place this inline_script in the sequence after the item that you want your participants to respond to.
Hope this helps a little.
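A minimal sketch of the threshold idea described above, with synthetic chunks standing in for microphone input (with pyaudio you would read each chunk from a stream instead; the 10 ms chunk size and the sample values are assumptions for illustration):

```python
import math

def rms(chunk):
    """Root-mean-square volume of a chunk of samples in [-1.0, 1.0]."""
    return math.sqrt(sum(s * s for s in chunk) / len(chunk))

def detect_onset(chunks, threshold=0.05, chunk_ms=10):
    """Return the time (ms) of the first chunk louder than the threshold,
    or None if the participant never spoke."""
    for i, chunk in enumerate(chunks):
        if rms(chunk) > threshold:
            return i * chunk_ms
    return None

# Synthetic stream: 30 ms of near-silence, then a loud response.
silence = [0.001] * 160
speech = [0.3] * 160
onset = detect_onset([silence, silence, silence, speech])
print(onset)  # -> 30
```

In a real experiment the loop would also stop after a timeout, and the threshold would need tuning per microphone and room, which is exactly why voice keys are noisy RT measures.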
Hi Katharina,
Given that you want to run the study online, it might be better to start with javascript right away, rather than Python. But anyway, on Cedrus, they publish example python code showing how to connect the response box. You can find it here:
A starting point would be to try to understand and reproduce the code and eventually to adapt it to your needs.
As for javascript, there doesn't seem to be code published by them, but maybe it is possible to access the protocols that they list with javascript; unfortunately, I wouldn't know how.
@eduard This is another study I want to run online; this one will be a normal offline desktop study.
I will try the code posted here.
Sorry to bother you again.
Can you explain when the clock.time starts to count in this setup?
What I need is actually the start_time (and the RT as well, but start_time is more important).
many thanks!
In parallel, I would also like to save the voice recordings in a wav file. I tried to import wave but it didn't work.
Hi Katharina,
Sorry for the late reply.
Start time of what? The trial or the voice? If you want to measure it from the start of the trial, then the way you do it looks good to me. Provided that before the line start_time = clock.time() there is nothing else and the code is put in the run phase of the inline_script, start_time would correspond to the moment OpenSesame finishes with the sampler item. Depending on the settings of its duration parameter, this can either be after the sound finished playing (sound), or after initializing playing the sound.
Aside from that, voice keys are not really good RT measures (a lot of background noise, etc.), so don't put too much hope into getting results as accurate as with keyboards, let alone button boxes.
What packages do you use to record the voice? Pyaudio? Information on how to export recorded sounds are probably explained there. This link looks also promising.
Hope this helps,
Eduard
Thank you for your answer.
I have now tried
in the beginning of the inline with voice key
import pyaudio
import struct
import math
# A low threshold increases sensitivity, a high threshold
# reduces it. We need to play around with it.
sound_threshold = 0.05
# Maximum response time
timeout = 945
but I get an error
IOError: [Errno 13] Permission denied: 'output.wav'
What do you think this could be? I use Expyriment as the backend.
Hi Katherina,
You don't seem to have permission to write files where you are trying to write the files to. Can you specify a full path to your desktop?
I don't know what exactly it is (also depending on your operating system, but something like:
C://katherina/Desktop/output.wav?
Eduard
thanks for a speedy reply @eduard
It's
C:\Users\Katharina Kühne\Desktop
So? Does it work if you specify that path?
Still get an error
Not sure, but I think a way is by using the soundFile library. In any case, you can check out this documentation here:
in particular (recording with arbitrary duration).
It probably also works with plain python methods, but I wouldn't know from the top of my head what the problem is with your code.
Eduard
It would be all right if it just records 1 s; the responses are around 500 ms. I can cut the rest if I want, but it's more to document WHAT they say, not how.
I believe I am placing that in a wrong place
sorry for the naive questions
Hi Katherina,
I just checked the code and it works fine. It writes a file after recording for 3 seconds. However, it does not produce a voice key, so there is nothing happening once you start speaking.
For that you would indeed need a voice key plugin, like the one provided in psychopy (which you can import in Opensesame and use).
Sorry for asking again, but can you summarize again, what exactly it is that you need, and what it is that doesn't work.
As far as I understand, you want in any case a wave file. That should be easily possible with the sounddevice code. If it doesn't work, then I need more information on what the problem is. Are there error messages? Is a file produced?
Thanks,
Eduard
Thanks a lot!
I always get
IOError: [Errno 13] Permission denied: 'output.wav'
I specified the path and everything, but still. Having the response as a voice key I finally managed :) thanks to you all
but now it would be nice to have a wave file.
Do you have anything special installed on your machine?
Many thanks!
aha, okay. So, only the write-file issue is left.
The error message is probably misleading. You probably have the rights to write files to your desktop, but the problem is related to the wrong paths or folders that don't exist. It is hard to really clearly tell you what to do. You can check some online resources that might help addressing the error: for example:
I hope this helps,
Eduard
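Whatever recording library is used, this class of "Permission denied" problem is easier to debug when the output path is built explicitly and the file is written with Python's standard-library wave module. A sketch with a synthetic tone standing in for the recorded buffer (it writes to the temp directory; substitute your own folder, avoiding spaces and umlauts as discussed above):

```python
import math
import os
import struct
import tempfile
import wave

def write_wav(path, samples, rate=44100):
    """Write 16-bit mono samples (floats in [-1, 1]) to path."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)          # 16-bit
        w.setframerate(rate)
        frames = b"".join(struct.pack("<h", int(s * 32767)) for s in samples)
        w.writeframes(frames)

# Build the target path explicitly so a bad directory shows up immediately.
out_dir = tempfile.gettempdir()            # substitute your Desktop path
out_path = os.path.join(out_dir, "output.wav")

# One second of a 440 Hz tone as a stand-in for the recording buffer.
tone = [0.5 * math.sin(2 * math.pi * 440 * t / 44100) for t in range(44100)]
write_wav(out_path, tone)
print(out_path, os.path.getsize(out_path) > 44)  # the header alone is 44 bytes
```

If this works but the real experiment still fails, the problem is in the path the experiment builds, not in the wave-writing itself.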
Thanks a lot! I realized that this very error was because of my account name with ü, changing the account did not change the path. I recorded it on a neutral USB stick and it worked. But! It only records the very last trial.
Moreover, it somehow enormously enlarged the presentation time - from 2 to 4 ms!
Thank you for your patience and help.
I mean, seconds of course :)
Yeah, Umlaute (as well as white space) is not ideal for file names and directories. I'd change that to something simpler (e.g. katkue?). But nice that it is fixed.
The delay might come from running it on the USB. However, a doubling of the presentation time is unlikely to come from a device lag. Probably, something else is happening. Can you try to change the presentation time to other values (0.5, 1, 4 seconds) and check whether the actual times are still double each time? If so, then somewhere in your code there is probably some accidental doubling happening, e.g. calling one thing twice. If you share your code or experiment, I could have a look then.
Eduard
Many thanks!
The file is included.
It's actually a mega easy experiment, with a metronome sound coming every (ideally) 2 seconds, and participants having to say something after the sound. This should be recorded as the voice_onset time and the .wav file.
have you checked the consistency of the delay as I suggested?
Sigh.
There are three distinct things here.
- API definition - something we do all the time.
- "Defensive Programming" - something that may or may not actually exist.
- Paranoid Schizophrenic programming - a symptom of larger problems; this exists far too often.
It's not that complicated, there's a simple 3-element checklist for API design. Unless "someone" is out to break your API. Whatever that means.
A related topic is this kind of thing on Stack Overflow: How Do I Protect Python Code? and Secure Plugin System For Python Application.
Following the Rules
When we define an API for a module, we define some rules. Failure to follow the rules is -- simply -- bad behavior. And, just as simply, when someone breaks the API rules, the module can't work. Calling the API improperly is the same as trying to install and execute a binary on the wrong platform.
It's the obligation of the designer to specify what will happen when the rules are followed. While it might be nice to specify what will happen if the rules are not followed, it is not an obligation.
Here's my canonical example.
def sqrt( n ):
"""sqrt(n) -> x such that x**2 == n, where n >= 0."""
The definition of what will happen is stated. The definition of what happens when you attempt sqrt(-1) is not defined. It would be nice if sqrt(-1) raises an exception, and it would be nice to include that in the documentation, but it isn't an obligation of the designer. It's entirely possible that sqrt(-1) could return 0. Or (0+1j). Or nan.
Item one on the checklist: define what the function will do.
And note that there's a world of difference between failing, and being used improperly. We're talking about improper use here; failure is unrelated.
Complete Specification
When I remind people that they are only obligated to specify the correct behavior, some folks say "That's just wrong! An API document should specify every behavior! You can't omit the most important behavior -- the edge cases!"
Ummm... That position makes no sense.
There are lots and lots of situations unspecified in the API documentation. What about sqrt(2) when the underlying math libraries are mis-installed? What about sqrt(2) when the OS has been corrupted by a virus in the math libraries? What about sqrt(2) when the floating-point processor has been partially fried? What about sqrt(2) when the floating-point processor has been replaced by a nearly-equivalent experimental chipset that doesn't raise exceptions properly?
Indeed, there are an infinite number of situations not specified in the API documentation. For the most part, there is only one situation defined in the API documentation: the proper use. All other situations may as well be left unspecified. Sometimes, a few additional behaviors are specified, but only when those behaviors provide value in diagnosing problems.
Diagnosing Problems
An API with thoughtful documentation will at least list the exceptions that are most likely to be raised. What's important is that it does not include an exhaustive list of exceptions. Again, that's an absurd position -- why list MemoryError on every single function definition?
What's important about things like exceptions and error conditions is the diagnostic value of this information. A good designer will provide some diagnostic hints instead of lots of words covering every "possible" case.
If there's no helpful diagnostic value, don't specify it. For example, there's little good to be done by adding a "Could raise MemoryError" on every method function description. It's true, but it isn't helpful. Except in a rare case of an API function that -- if used wrong -- will raise a MemoryError; in this rare case you're providing diagnostic information that can be helpful. You are over-specifying the API, but you're being helpful.
Item two on the checklist: provide diagnostic hints where they're actually meaningful and helpful.
Error Checking
How much error checking should our sqrt() function do?
- None? Just fail to produce an answer, or perhaps throw an exception?
- Minimal. This is easy to define, but many folks are unhappy with minimal.
- More than minimal but not everything. This is troubling.
- Everything. This is equally troubling.
No error checking is easiest. And it fits with our philosophy. If our sqrt function is used improperly -- i.e., someone broke the rule and provided a negative number -- then any exception (or nan value) will propagate to the caller and we're in good shape. We didn't overspecify -- we provided a wrong answer when someone asked a wrong question.
Again, we're not talking about some failure to process the data. We're talking about being called in a senseless way by a client that's not following the rules.
There's a subtlety to this, however.
A Non-Math Examples
Yesterday, I tried to use a postal scale to measure the temperature in my oven. The scale read 2.5 oz.
What does that mean?
I asked an ill-formed question. I got something back. It isn't an answer -- the question was ill-formed -- but it looks like an answer. It's a number where I expected a number.
Here's another one. "Which is heavier, the number 7 or the color green?" Any answer ("7", "green" or "splice the main brace") is valid when confronted with a question like that.
Perhaps I should have run a calibration (or "unit") test first.
The Termination Question
In the case of a function like square root, there is an additional subtlety. If we're using logarithms to compute square root, our log function may raise an exception for sqrt(-1) or it may return nan; either of which work out well - an ill-formed question gets an improper answer.
However, we might be using a search algorithm that will fail to terminate (a bisection algorithm, or Newton's method, for example.) Failure to terminate is a much, much worse thing. In this case -- and this case only -- we have to actually do some validation on the range of inputs.
Termination is undecidable by automated means. It's a design feature that we -- as programmers -- must assert independently of any lint, compiler or testing discipline.
Note that this is not "defensive programming". This is ordinary algorithm design. Every loop structure must terminate. If we're trying a simple bisection algorithm and we have not bracketed a root properly (because, for example, it's a complex number), the bisection won't terminate. A root-finding bisection algorithm must actually do two things to assure termination: check the range of the inputs and limit the number of iterations.
This isn't defensive programming because we're not checking that a mysterious "someone" is abusing the API. We're asserting that our loop terminates.
Item 3 on the checklist: reject values that would lead loops to not terminate properly.
def sqrt( n ):
"""sqrt(n) -> x; such that x**2 == n; where n >= 0"""
assert n >= 0
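Here is what item 3 looks like in practice for a root-finding implementation: the input range is checked once, and the loop carries a hard iteration cap, so it terminates for every input (an illustrative sketch, not production numerics):

```python
def sqrt_bisect(n, tolerance=1e-9, max_iterations=200):
    """sqrt(n) -> x such that x*x ~= n; n must be >= 0."""
    assert n >= 0, "sqrt undefined for negative numbers"
    low, high = 0.0, max(n, 1.0)        # brackets the root for any n >= 0
    for _ in range(max_iterations):     # hard cap: the loop must terminate
        mid = (low + high) / 2
        if abs(mid * mid - n) < tolerance:
            return mid
        if mid * mid < n:
            low = mid
        else:
            high = mid
    return (low + high) / 2             # best effort once the cap is hit

print(sqrt_bisect(2))  # ~1.41421356
```

The assert rejects the one input class that would break the bracketing; the iteration cap guards against everything else, including floating-point stubbornness.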
Incorrect Error Checking
Once we start checking for loop termination, folks say that "we're on a slippery slope" and ask where's that "fine line" between the minimal level of error checking (loops will terminate) and the paranoid schizophrenic level of error checking.
It isn't a slope. It's a cliff. Beyond loop termination, there's (almost) nothing more that's relevant.
By "almost", I mean that languages like Python have a tiny realm where an additional assertion about the arguments is appropriate.
Because of duck typing, many algorithms in Python can be written very generically. Very generically. Sorting, for example, can be applied to lists of -- almost -- anything. Except, of course, it isn't meaningful for things with no useful __cmp__ function. And in the case of things like a dictionary, what's the basis for comparison?
In the case of dynamic languages and duck typing, it's possible that an algorithm will terminate, producing a wrong answer. (BTW, this one reason why Python has / and // as distinct division operators -- to assure that ints and floats can be used interchangeably and the algorithm still works.)
Item 4 on the checklist: When you have a known problem with a type, reject only those types that are a problem. This is very rare, BTW. Mostly it occurs with overlapping types (lists and tuples, floats and ints.) Most well-designed algorithms work with a wide variety of types. Except in the overlapping types situation, Python will raise exceptions for types that don't work; make use of this.
What About "Business Rules"?
By "business rules" most people mean value ranges or codes that are defined by some externality. As in "the claim value must be a number between the co-pay and the life-time limit".
This is not a "Defensive Programming" issue. This is just a policy statement written into the code. Your API won't break if the claim value is less than the co-pay. Your users will be pissed off, but that's a separate problem.
Also, you rarely raise an exception for business rules. Usually, you'll collect business rule violations into a formal error report or log. For example, Django's Forms will collect a dictionary of validation errors. Each element in the dictionary has a list of problems with a particular field on the form.
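The collect-don't-raise pattern is simple enough to sketch without Django; the field names and rules here are invented for illustration:

```python
def check_claim(claim, co_pay, lifetime_limit):
    """Collect business-rule violations instead of raising on the first one."""
    errors = {}
    if claim["value"] < co_pay:
        errors.setdefault("value", []).append(
            "claim value must be at least the co-pay")
    if claim["value"] > lifetime_limit:
        errors.setdefault("value", []).append(
            "claim value exceeds the life-time limit")
    return errors            # an empty dict means the claim passed

print(check_claim({"value": 50}, co_pay=100, lifetime_limit=100000))
# -> {'value': ['claim value must be at least the co-pay']}
```

The API doesn't break when a rule is violated; the caller gets a complete report and decides what to show the (possibly annoyed) user.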
What About "Someone" Who Can't Use The API?
Here's where the conversation goes awry.
First, if this is a hypothetical "someone", you need to relax. Consider these use cases. Are you worried that "someone" will download your software, install it, configures it, start to use it, and refuse to follow the documented API? Are you worried that they will send you angry emails saying that they insist on doing the wrong thing and your software doesn't work? You don't need "defensive programming", you need to either add the features they want or steer them to a package that does what they're expecting.
Here's another version of a hypothetical someone: you're working as part of a larger team, and you provide a package with an API. Are you worried that a team member will refuse to follow the documented API? Are you worried that they will send you angry emails saying that they insist on doing the wrong thing and your software doesn't work? This isn't a call for "defensive programming," this is a call for a conversation. Perhaps you built the wrong thing. Perhaps your API documentation isn't as crystal-clear as you thought.
Someone Really Is Using It Wrong
A common situation is someone who's actually using the API wrong. The conversation didn't help, they refuse to change their software. Or you can't easily call them out on it because -- for example -- your boss wrote detailed specs for you, which you followed, but someone else isn't following. What can you do? The specification contradicts the actual code that uses the API.
Is this a place where we can apply "Defensive Programming"?
Still no.
This is a call for some diagnostic support. You need error messages and logs that help you diagnose the problem and locate the root cause.
Root Causes
The issue with "Defensive Programming" is that it conflates two unrelated use cases.
- API Design.
- Unwilling (or unable) to Follow Instructions. (UFI™)
API design has four simple rules.
- Define what each function will do when used correctly.
- Provide diagnostic hints (exceptions, error values) where they're actually meaningful and helpful.
- Reject values that would lead loops to not terminate properly.
- Where overlapping types are a known problem, reject only those types.
Sociopaths
If (1) someone refuses to follow the rules and (2) complains that it's your API and (3) you elect to make changes, then...
First, you can't prevent this. There's no "defensive programming" to head this off.
Second, know that what you're doing is wrong. Making changes when someone else refuses to follow the rules and blames you is enabling someone else's bad behavior. But, we'll assume you have to make changes for external political reasons.
Third -- and most important -- you're relaxing the API to tolerate ordinarily invalid data.
Expanding What's "Allowed"
When someone refuses to follow the API -- and demands you make a change -- you're having this conversion.
Them: "I need you to 'handle' sqrt(-1)."
You: "Square Root is undefined for negative numbers."
Them: "I know that, but you need to 'handle' it."
You: "There's no answer, you have to stop requesting sqrt(-1)."
Them: "Can't change it. I'm going to make sqrt(-1) requests for external political reasons. I can't stop it, prevent it or even detect it."
You: "What does 'handle' mean?"
At this point, they usually want you to do something that lets them limp along. Whatever they ask you to do is crazy. But you've elected to cover their erroneous code in your module. You're writing diagnostic code for their problem, and you're burying it inside your code.
If you're going to do this, you're not doing "defensive programming", you're writing some unnecessary code that diagnoses a problem elsewhere. Label it this way and make it stand out. It isn't "defensive" programming. It's "dysfunctional co-dependent relationship" programming.
"?
How can I say, from the cmd line, that python should take my CWD as my CWD, and not the directory where the script actually is?
I have a python script that works fine when it sits in directory WC, but if I move it out of WC to H and put a symlink from H/script to WC, it doesn't find the packages that are in WC. Also, if I use the
absolute path to H, it won't find them, but I guess I can understand that.
Someone said on the net that python doesn't know whether a file is real or a symlink, but I think that somehow, python is able to find out where the real file is and treat that as its base of operations.
Python does use your current working directory as your current working directory. I think you are misdiagnosing the problem.
Here's a demonstration:
$ cat test.py
import os
print os.getcwd()
$ ln -s ~/test.py /tmp/test
$ ls -l /tmp/test
lrwxrwxrwx 1 ... ... 19 May 16 18:58 /tmp/test -> /home/steve/test.py
$ cd /etc/
$ python /tmp/test
/etc
The obvious solution is to make sure that WC is in the Python path. You can do that by adding it to the environment variable PYTHONPATH, or by adding it to sys.path at the start of your script. I think you can also use a .pth file as well.
Another solution may be to add this to the beginning of your script:
os.chdir('path/to/WC')
but that's a crappy solution, you generally don't want to be changing the working directory from inside scripts if you can avoid it.
Symlinks can find their targets, but targets have absolutely no way of knowing where symlinks to them are. It's one-way. It would work if the actual file were in WC and you created a symlink inside H.
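That one-way resolution is easy to demonstrate with os.path.realpath: the link resolves to its target, while nothing about the target reveals the link. A POSIX-only sketch using temporary directories named after the WC and H directories in the question:

```python
import os
import tempfile

workdir = os.path.realpath(tempfile.mkdtemp())
target = os.path.join(workdir, "WC", "script.py")   # the real file
link = os.path.join(workdir, "H", "script.py")      # the symlink
os.makedirs(os.path.dirname(target))
os.makedirs(os.path.dirname(link))
with open(target, "w") as f:
    f.write("print('hello')\n")
os.symlink(target, link)

# The link knows its target; the target knows nothing about the link.
print(os.path.realpath(link) == target)             # True
print(os.path.dirname(os.path.realpath(link)))      # .../WC
```

Resolving the link lands you in WC, which is why putting the real file in WC and only the symlink in H works, while the reverse arrangement cannot.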
Python scripts can run without a main(). What is the advantage to using a main()? Is it necessary to use a main() when the script uses command line arguments? (See script below)
#!/usr/bin/python
import sys
def main():
# print command line arguments
for arg in sys.argv[1:]:
print arg
if __name__ == "__main__":
main()
Shuffle two lists at once with the same order
I'm using the nltk corpus movie_reviews, which contains a lot of documents. My task is to get the predictive performance of these reviews with pre-processing of the data and without pre-processing. But there is a problem: in the lists documents and documents2 I have the same documents, and I need to shuffle them in order to keep the same order in both lists. I cannot shuffle them separately, because each time I shuffle a list I get different results. That is why I need to shuffle them at once with the same order, because I need to compare them in the end (it depends on the order). I'm using python 2.7.
Example (in reality the strings are tokenized, but that is not relevant):
documents = [(['plot : two teen couples go to a church party , '], 'neg'), (['drink and then drive . '], 'pos'), (['they get into an accident . '], 'neg'), (['one of the guys dies'], 'neg')]
documents2 = [(['plot two teen couples church party'], 'neg'), (['drink then drive . '], 'pos'), (['they get accident . '], 'neg'), (['one guys dies'], 'neg')]
And I need get this result after shuffle both lists:
documents = [(['one of the guys dies'], 'neg'), (['they get into an accident . '], 'neg'), (['drink and then drive . '], 'pos'), (['plot : two teen couples go to a church party , '], 'neg')]
documents2 = [(['one guys dies'], 'neg'), (['they get accident . '], 'neg'), (['drink then drive . '], 'pos'), (['plot two teen couples church party'], 'neg')]
I have this code:
def cleanDoc(doc):
    stopset = set(stopwords.words('english'))
    stemmer = nltk.PorterStemmer()
    clean = [token.lower() for token in doc if token.lower() not in stopset and len(token) > 2]
    final = [stemmer.stem(word) for word in clean]
    return final

documents = [(list(movie_reviews.words(fileid)), category)
             for category in movie_reviews.categories()
             for fileid in movie_reviews.fileids(category)]

documents2 = [(list(cleanDoc(movie_reviews.words(fileid))), category)
              for category in movie_reviews.categories()
              for fileid in movie_reviews.fileids(category)]

random.shuffle( and here shuffle documents and documents2 with same order )  # or somehow
You can do it as:
import random

a = ['a', 'b', 'c']
b = [1, 2, 3]

c = list(zip(a, b))
random.shuffle(c)
a, b = zip(*c)

print a
print b

[OUTPUT]
['a', 'c', 'b']
[1, 3, 2]
Of course, this was an example with simpler lists, but the adaptation will be the same for your case.
Hope it helps. Good Luck.
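An equivalent approach that avoids zipping and re-splitting, and scales to any number of parallel lists, is to shuffle a list of indices once and apply the same permutation everywhere (short stand-in strings instead of the real reviews):

```python
import random

documents = ["doc a", "doc b", "doc c", "doc d"]
documents2 = ["clean a", "clean b", "clean c", "clean d"]

order = list(range(len(documents)))
random.shuffle(order)                       # one permutation ...

documents = [documents[i] for i in order]   # ... applied to both lists
documents2 = [documents2[i] for i in order]

print(documents)
print(documents2)  # pairs still line up index by index
```

This also keeps each element's type intact (zip(*c) hands back tuples, which can surprise you later).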
From: stackoverflow.com/q/23289547
Tutorial: Creating an Angular 2.0 Todo App
Introduction
With Angular 2.0 right around the corner, big changes are in store. If you would like to understand the differences between Angular 1 and Angular 2 check out this blog post! For the past few years, building a Todo Application has been used as the “Hello World” of understanding a new framework, with just enough meat to get you going. If you already understand Angular 2.0 and just want a working example, visit the finished application here.
Disclaimer: Angular 2.0 is currently in Beta and the syntax is subject to change.
In this tutorial you will:
- Explore the base of an Angular 2.0 application
- Create an Angular 2.0 component
- Add a new route
- Implement Todo add/remove logic
Step 1: The Base
Angular 2.0 is written in TypeScript, a superset of the ES6 javascript standards. Although TypeScript is not required to build an Angular 2.0 application, the following examples will be written exclusively in TypeScript.
In order to get started on the right foot, this tutorial will begin with a blank Angular 2.0 template. This includes a gulp build and the configurations you need to transpile the TypeScript. Since the usage of these common build tools is out of the scope of this blog post, you might want to read up on Gulp, TSD, and Karma.
To get started, run the following commands:
Step 2: Creating a new Component
First let’s create a new directory called ‘
todo' under ‘
app/components.' In the ‘
todo' directory add two new files called ‘
todo.component.ts' and ‘
todo.html,’ respectively. Combined, these two files will make up our component.
todo.component.ts
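Based on the explanation that follows, a sketch of this file in beta-era syntax (the paths and import locations here are assumptions) looks like:

```typescript
import {Component} from 'angular2/core';
import {CORE_DIRECTIVES} from 'angular2/common';

@Component({
    selector: 'todo',
    templateUrl: 'app/components/todo/todo.html',
    directives: [CORE_DIRECTIVES]
})
export class TodoComponent {
    todos: string[] = [];

    add(todo: string) {
        // placeholder, filled in under Step 4
    }

    remove(todo: string) {
        // placeholder, filled in under Step 4
    }
}
```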
Explanation of todo.component.ts
The '@Component' annotation defines three properties: 'selector', 'templateUrl', and 'directives'. The 'selector' property is the name that the component will bind to. In this case, whenever the '<todo>' tag is used, this component will be injected. As you might have guessed, the 'templateUrl' defines the html template that will be injected in place of the selector tag. Finally, the 'directives' property defines any external directives that are used in the template. In this case, we import and use 'CORE_DIRECTIVES'. This includes the basic directives built into Angular 2.0. The second part of this file defines the TypeScript class that will bind to the html file. In this case, we define two placeholder methods which we will fill in later.
todo.html
Explanation of todo.html
This HTML file, although mostly bootstrap style, has a few key Angular 2.0 elements:
<form (submit)="add(newtodo.value)">
When the form is submitted, the add function will be called with the value of the local variable 'newtodo'. This local variable is bound to the input text box.
*ng-for
Using the core directive '*ng-for' we can iterate through the list of todos and assign each to an element 'todoitem'. This is very similar to an 'ng-repeat' in Angular 1.x.
<button id="todo-remove" (click)="remove(todoitem)">
On the click of the button, the 'remove' function is called with the selected 'todoitem' in the list.
Step 3: Adding a Route
Let's add a new route for our Todo Page. In Angular 2.0, routes are added using the '@RouteConfig' annotation. Locate 'app.ts' in the project, and add:
{ path: '/todo', component: TodoComponent, as: Todo }
Since we are using a new component in 'app.ts', we also need to be sure to import the component:
import {TodoComponent} from '../todo/todo.component';
When completed, 'app.ts' should look like this:
Then, add the link to the navbar. After editing, 'app.html' should look like this:
Step 4: Adding the Todo Logic
Add these functions to 'todo.component.ts':
Explanation
The add function is fairly straightforward, as we push the input string onto the array. We return false from this function so that we can also use the 'Enter' key to submit a todo. The remove function finds the first occurrence of the string and removes it from the array.
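Stripped of the Angular wiring, the two methods boil down to ordinary array operations; a standalone sketch of the same logic (the class and variable names here are illustrative):

```typescript
class TodoList {
    todos: string[] = [];

    add(todo: string): boolean {
        this.todos.push(todo);
        return false;   // false so the Enter-key submit doesn't reload the page
    }

    remove(todo: string): void {
        const index = this.todos.indexOf(todo);
        if (index !== -1) {
            this.todos.splice(index, 1);   // drop only the first occurrence
        }
    }
}

const list = new TodoList();
list.add("milk");
list.add("eggs");
list.add("milk");
list.remove("milk");
console.log(list.todos);   // [ 'eggs', 'milk' ]
```

Note that indexOf/splice removes only the first matching entry, which matches the behaviour described above when the list contains duplicates.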
Wrap up
Congratulations on making your first Angular 2.0 Todo Application! You now understand the basic concepts of building TypeScript, creating a component, and setting up a route. If you run into any issues, please leave a comment, or submit an issue on GitHub.
One thought on “Tutorial: Creating an Angular 2.0 Todo App”
Hi Jake!
Thanks for your tutorial, although I am sorry to say I am not able to use it.
The code for todo.component.ts doesn’t seem to be working, it gave me 5 errors right away:
Error:(1, 1) TS1148: Cannot compile modules unless the ‘–module’ flag is provided.
Error:(1, 42) TS2307: Cannot find module ‘angular2/core’.
Error:(3, 1) TS1205: Decorators are only available when targeting ECMAScript 5 and higher.
Error:(10, 14) TS1219: Experimental support for decorators is a feature that is subject to change in a future release. Specify ‘–experimentalDecorators’ to remove this warning.
Error:(13, 26) TS2355: A function whose declared type is neither ‘void’ nor ‘any’ must return a value or consist of a single ‘throw’ statement.
I followed all the steps and it was working smoothly, but after adding the logic it's not working.
Hi Asit,
What error(s) are you getting after adding those two functions? | https://objectpartners.com/2015/12/15/tutorial-creating-an-angular-2-0-todo-app/?replytocom=33215 | CC-MAIN-2020-45 | refinedweb | 888 | 67.35 |
Ad-hoc parsing using parser combinators
If you work for any given amount of time as a software developer, one problem you’ll end up with is parsing structured text to extract meaningful information.
I’ve faced this issue more than I care to remember. I even think that’s why I learned regular expressions in the first place. But, as Internet wisdom will teach you regexes are only suitable for a subset of the parsing problems out there. We learn it at school: grammars are the way to go. Still, it is my deep belief that most engineers, when faced with a parsing problem, will first try to weasel their way out using regular expressions. Why? Well…
(do try that trick with the forks, it’s awesome)
There are many tools you can use to generate parsers in various languages. Most of those involve running some custom compilation step over your grammar file, from which a set of source code files will be produced. You’ll have to include those in your build, and then figure out a way to use them to transform your input text into a meaningful structure. Hmm. Better go fetch the duct tape.
Enter parser combinators. As Wikipedia states, parser combinators are essentially a way of combining higher-order functions recognizing simple inputs to create functions recognizing more complex input. It might all sound complex and theoretical, but it's in fact pretty simple. Here's an example: suppose I have two functions able to recognize the ( and ) tokens in some text input. Using parser combinators, I could assemble those to recognize a sequence in that input made up of an opening parens and then a closing one (of course in the real world you'd want stuff in between too).
Still too complex? Let's see some real world code now. Let's implement a simple parser for performing additions and subtractions (ex: 1 + 2 + (3 + 4) + 5). I'll use Scala because its base library comes with built-in support for parser combinators, but similar functionality is available for other languages too.
First, let’s define a few classes to hold our AST:
abstract class Expression {
  def evaluate(): Int
}

case class Number(value: Int) extends Expression {
  def evaluate() = value
}

case class Parens(expression: Expression) extends Expression {
  def evaluate() = expression.evaluate()
}

case class Addition(left: Expression, right: Expression) extends Expression {
  def evaluate() = left.evaluate() + right.evaluate()
}

case class Substraction(left: Expression, right: Expression) extends Expression {
  def evaluate() = left.evaluate() - right.evaluate()
}
For the curious, a case class in Scala is essentially a shorthand for immutable classes holding a few properties. They automatically come with proper equals and toString implementations (among other things). They are perfect for this purpose.
Now here’s the parser that goes with it:
object SimpleExpressionParser extends RegexParsers {
  def parse(s: String): Expression = {
    parseAll(expression, s) match {
      case Success(r, _) => r
      case NoSuccess(msg, _) => throw new Exception(msg)
    }
  }

  lazy val expression: Parser[Expression] = binary | parens | number

  lazy val parens = "(" ~ expression ~ ")" ^^ {
    case "(" ~ e ~ ")" => Parens(e)
  }

  lazy val binary = (number | parens) ~ operator ~ expression ^^ {
    case l ~ "+" ~ r => Addition(l, r)
    case l ~ "-" ~ r => Substraction(l, r)
  }

  lazy val number = regex("""\d+""".r) ^^ {
    case v => Number(v.toInt)
  }

  lazy val operator = "+" | "-"
}
The parser is in fact a class (in this case a singleton, but that's not mandatory). Its methods define a set of rules that can be used for parsing. Notice how close those look to a typical BNF grammar? That's right, you won't be needing that duct tape after all.
Rules can be very simple, such as operator, which recognizes simple tokens using either strings or regular expressions, or more complex, such as binary, which combines other rules using special operators. Note that those operators are just methods from the base RegexParsers class. The Scala libraries provide many operators and methods to define how parsers can be combined. In this case I'm using the ~ operator, which denotes a sequence. It's also possible to match variable sequences, optional values, and many, many others.
The return value for each rule is in fact a Parser[T], where T is the type of item that is recognized. Simple rules based on strings or regular expressions return a Parser[String] without the need for further processing. Rules that combine multiple values, or that need the raw token to be transformed in some way (such as number), can be followed by the ^^ operator applied to a partial function that'll match the recognized stuff using pattern matching and then produce the resulting value. For example, binary returns either an Addition or a Substraction, which means its inferred return value is Parser[Expression].
Here’s how this parser could be used:
assertEquals(1, SimpleExpressionParser.parse("1").evaluate())
assertEquals(3, SimpleExpressionParser.parse("1 + 2").evaluate())
assertEquals(15, SimpleExpressionParser.parse("1 + (2 + 3) + 4 + 5").evaluate())
The parse method either returns an Expression or throws an exception when a parsing error occurs. Then calling evaluate on that expression recursively computes the expression and returns the result.
If I were to use the toString method on the root expression, for an input string 1 + 2 I would end up with Addition(Number(1),Number(2)), which shows that the result from the parsing is a nice, easy to use AST.
Dealing with left recursive grammars
You might have noticed that in my example the definition for the binary rule didn't use expression on the left side of the operator. Why can't I do something like this?
def binary = expression ~ operator ~ expression
The problem with this rule is that it makes my grammar a left recursive one, and by default the Scala parser combinators don't handle that quite well. While processing the input, the expression rule is called, which eventually digs into binary, which then invokes expression again on the same input … and then you end up with (and on) Stack Overflow.
So how to work around this? One possibility is to ensure that your grammar never recurses to the same rule without consuming at least one character (i.e. do not use left recursion). That's why I used a slightly more complex form in my initial sample. Another possibility when using Scala parser combinators is to mix the PackratParsers trait into your class, which enables support for left recursion:
object SimpleExpressionParser extends RegexParsers with PackratParsers {
  def parse(s: String): Expression = {
    parseAll(expression, s) match {
      case Success(r, _) => r
      case NoSuccess(msg, _) => throw new Exception(msg)
    }
  }

  lazy val expression: PackratParser[Expression] = binary | parens | number

  lazy val parens = "(" ~ expression ~ ")" ^^ {
    case "(" ~ e ~ ")" => Parens(e)
  }

  lazy val binary = expression ~ operator ~ expression ^^ {
    case l ~ "+" ~ r => Addition(l, r)
    case l ~ "-" ~ r => Substraction(l, r)
  }

  lazy val number = regex("""\d+""".r) ^^ {
    case v => Number(v.toInt)
  }

  lazy val operator = "+" | "-"
}
Much better, isn't it?
Conclusion
In this post I've only shown a very simple parser, but using the same techniques it's possible to build much more complex ones that can process just about any kind of structured expression, without the need for external tools, and using very little code. All of a sudden, recognizing complex expressions no longer becomes an issue, and this opens up many possibilities when faced with situations where custom text input is being used. So give it a try!
Files, folders, __init__.py, and imports
Apologies if this question turns out to not be Pythonista-specific, but I'm having a hard time understanding why my folders + imports break when I move things in Pythonista.
When I place foo.py somewhere in Pythonista (at the top level, or within a folder, or within a folder that contains an empty __init__.py, or a deeper folder), what are the rules for doing an import both (a) from foo.py, and (b) of foo.py?
Is site_packages special? Any others?
Do these rules depend on where the top-level main file being run is located?
Thanks!
The import foo statement (which internally becomes foo = __import__("foo")) looks through all directories listed in sys.path for a module with the name foo. sys.path includes a number of internal Pythonista folders holding the standard library and extensions that come with the app, as well as the main Script Library (aka Documents) folder and site-packages. Any other folder will not be searched when importing. This means that having foo.py in either Documents or site-packages makes it importable, but having it in subfolders doesn't.
Packages are a special type of module source. Any folder containing an __init__.py file is considered a package and can be imported. A package may contain any number of sub-modules and sub-packages. Say you have a folder foo in Documents or site-packages containing the files __init__.py, bar.py and spam.py. The statement import foo will import the __init__.py file as the name foo. import foo.bar will first import __init__.py to the name foo, then import bar.py as the attribute foo.bar. (i. e. importing a submodule will also import all parent modules from top to bottom.) Do note however that importing a module will normally not import any of its submodules. Some packages (such as numpy or sympy) do this in their __init__.pys for convenience, so for example writing import numpy will automatically give you a useful set of NumPy's modules without needing to explicitly import them.
There is also a feature for imports inside packages called relative imports. For instance, in bar.py from our above example, we could write from . import spam to import spam.py from the current package. Unlike from foo import spam, this import will work no matter what you name the foo package. Relative imports only work in scripts that are not the main script though. If bar.py contains from . import spam and we do import foo.bar from another script, the code will run fine - but if we run bar.py as the main script it will error, because the main script is never considered part of a package. This may seem an unnecessary restriction, but is important in modules that depend on their parent packages' __init__.pys being run beforehand.
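To make that concrete, here is a small self-contained sketch (the package is built in a temporary folder purely for illustration) that reproduces the foo/bar/spam layout and exercises both the submodule import and the relative import:

```python
import os
import sys
import tempfile

# Recreate the example layout: a package "foo" with __init__.py,
# spam.py, and bar.py (which does a relative import of spam).
base = tempfile.mkdtemp()
pkg = os.path.join(base, "foo")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("loaded = 'foo package'\n")
with open(os.path.join(pkg, "spam.py"), "w") as f:
    f.write("value = 42\n")
with open(os.path.join(pkg, "bar.py"), "w") as f:
    f.write("from . import spam\nresult = spam.value\n")

# Putting the parent folder on sys.path is effectively what Documents
# and site-packages do for you in Pythonista.
sys.path.insert(0, base)

import foo.bar  # runs foo/__init__.py first, then foo/bar.py

print(foo.loaded)      # foo package
print(foo.bar.result)  # 42
```

If bar.py were instead run as the main script, its relative import would fail, as the answer explains.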
(This is a very lengthy explanation and probably more than you asked for, but hopefully it'll answer any import questions you may have ;)
- polymerchm
This is a great resource. Thanks for all the detail. It demystifies much of what goes on in import.
Very helpful, thanks so much! | https://forum.omz-software.com/topic/1344/files-folders-__init__-py-and-imports/1 | CC-MAIN-2022-27 | refinedweb | 511 | 69.18 |
I am a new user of this fine piece of software and overall I love it so far, but I've stumbled upon a huge problem that has twisting my head for over a day. Without a fix I can't integrate to Sublime Text 2.
My problem: the terminal skips the input and shows me [Finished].
While compiling the following C code:
#include <stdio.h>
int main (void){
char ch;
ch = getchar(); /* This line is the cause of the problem, the input */
return 0;
}
My C.sublime-build:
{
    "cmd": ["tcc", "-run", "$file"],
    "file_regex": "^[ ]*File \"(...*?)\", line ([0-9]*)",
    "selector": "source.c"
}
Note: My platform is Windows 7 and I use tcc for compiling C.
I've searched for an answer, but couldn't find anything. Thank you in advance!
stdin isn't connected when running via Sublime Text, so interactive input won't work.
I have the problem too.
Originally posted by Rob Mech: Ok, i'm having a bit of a problem with the following code. I think there is just something I keep missing here. Given:
import java.util.*;
public class Mangler {
public static <K, V> Map<V, K> mangle(Map<K, V> in) {
Map<V, K> out = new HashMap<V, K>();
for (Map.Entry<K, V> entry : in.entrySet())
out.put(entry.getValue(), entry.getKey());
return out;
}
public static void main(String[] args) {
Map m1 = new HashMap();
m1.put("a", 1);
m1.put("b", 2);
Map m2 = mangle(m1);
System.out.println(m2.get("a") + " " + m2.get(2));
}
}
Regardless of the result I'm missing out on the understanding of the following code. public static <K, V> Map<V, K> mangle(Map<K, V> in) {} Ok, if <V,K> is the return type and <K,V> is the type being passed in. What is the "public static <K, V>" defining?
Originally posted by Keith Lynn: That is simply saying that you are going to have two type variables, K and V.
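As an illustrative, fully typed rewrite of the thread's snippet (the class name is mine, not from the thread): the <K, V> that appears right after static declares the two type variables, which the parameter type Map<K, V> and the return type Map<V, K> then both refer to.

```java
import java.util.*;

public class ManglerDemo {
    // The <K, V> after "static" introduces the two type variables; they are
    // then used in the parameter type Map<K, V> and the return type Map<V, K>.
    public static <K, V> Map<V, K> mangle(Map<K, V> in) {
        Map<V, K> out = new HashMap<>();
        for (Map.Entry<K, V> entry : in.entrySet())
            out.put(entry.getValue(), entry.getKey());
        return out;
    }

    public static void main(String[] args) {
        Map<String, Integer> m1 = new HashMap<>();
        m1.put("a", 1);  // autoboxing wraps the int literal as an Integer
        m1.put("b", 2);
        Map<Integer, String> m2 = mangle(m1);  // keys and values swapped
        System.out.println(m2.get(1) + " " + m2.get(2));  // prints "a b"
    }
}
```

With the types written out, it is also easier to see why the original main prints null for m2.get("a"): after mangling, lookups go by the original map's values, not its keys.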
Originally posted by Katrina Owen: The key and value must both be Objects - so no primitives. If you want to add an int, it is going to have to be wrapped as an Integer. Katrina
Originally posted by Katrina Owen: at which point autoboxing would take place when you added something like this:
someMap.put( 5, "Super Duper Example" );
Originally posted by Katrina Owen: Yes, autoboxing does occur in the first example. Hope I haven't confused things! (I'm pretty new at this, and try my hand at answering when I think I might help. Sometimes it doesn't work so well ) | http://www.coderanch.com/t/406824/java/java/Generics-death | CC-MAIN-2014-41 | refinedweb | 278 | 67.04 |
Just a quick question -- I'm probably overlooking something here.
The below method outputs the first 2 odd numbers correctly: [1,3]
If I'm not mistaken, shouldn't I want the length of the array to eventually equal n? As I understand it, the length of the outputted array [1,3] is 2, which also represents the first n-many odds: 2.
As such, the comparison in line 6 would now be <= rather than <
However, if I do that, first_n_odds(2) would now equal [1,3,5], which gives me the first three odds. What's going on here?
Thanks!
def first_n_odds(n)
array = []
current_number = 0
while array.length < n
if current_number % 2 == 1
array << current_number
end
current_number += 1
end
return array
end
puts first_n_odds(2) # output is [1,3]
Let's do your example with n == 2.

Iteration 1: array.length == 0.
Iteration 2: array.length == 1.

Both of these values are < 2. Now if you change < to <=, you'd have a 3rd iteration where array.length == 2, since your check happens before adding the new element to the array.
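To watch that extra iteration happen, here is the same loop with the comparison flipped to <= (a throwaway variant for illustration, not code from the thread):

```ruby
# Identical to first_n_odds except for the <= comparison.
def first_n_odds_with_lte(n)
  array = []
  current_number = 0
  while array.length <= n   # runs one iteration too many
    if current_number % 2 == 1
      array << current_number
    end
    current_number += 1
  end
  array
end

p first_n_odds_with_lte(2)  # prints [1, 3, 5] -- the third odd sneaks in
```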
Since you seem to be fairly new to Ruby, here are some ways to define the method in a more idiomatic way:

# Mapping over a range
def first_n_odds_1(n)
  (0...n).map { |x| x * 2 + 1 }
end

# Mapping over an Enumerator
def first_n_odds_2(n)
  n.times.map { |x| x * 2 + 1 }
end

# Using Numeric#step + Enumerable#take
def first_n_odds_3(n)
  1.step(Float::INFINITY, 2).take(n)
end

# A more explicit version of the previous method
def first_n_odds_4(n)
  1.step(by: 2, to: Float::INFINITY).take(n)
end
You can create a free account with us by accessing the signup page.
To create and run A/B tests, sign in to the VWO dashboard and then select Mobile App A/B on the menu. If you are using the VWO A/B testing feature for the first time, click Start Mobile App A/B Testing to begin.
To create A/B tests for mobile apps:
Add the mobile app to be tested.
Define the variables you want to test.
Create A/B tests
Create an App
Registering your app on VWO is a one-time process.
Adding your app generates an API key, which is used by VWO servers to recognize your app.
Select the Mobile App A/B option under the test menu.
Click Create, and then click Add App. Write a name for your app, and in the Platform option, select Android.
Note the API key generated by the system.
Add the mobile app to be tested
To add a new app for A/B testing, go to the Apps section on the page.
On the right side of the screen, click Create App.
Type the name of the app you want to add, and then click Create.
As you add an app, VWO generates API Keys for both the iOS and Android platforms. You can make a note of the API Key under the Settings section; it is used during app initialization.
Defining the Variables You Want To Test
Test variables are elements or parameters of your mobile app. After you define a variable, you can run an unlimited number of A/B tests on the variable, without doing any code changes or redeployment. For example, you can create a string-type variable for testing different text versions in the app screen.
Under the Apps tab, select the mobile app for which you want to create test variables.
To add an element for testing, under Variables section, click Create Variable.
Assign a name to the variable, and then select its data type.
Type the Default Value (the current value, or the value to use if there is no A/B test).
To add the variable, click Create. You can add multiple variables to an app.
Creating A/B Tests for Mobile Apps
On the Mobile App A/B testing screen, go to the Campaigns tab, and then click Create.
Choose the App you want to test. All mobile apps you have added to VWO are listed here.
Select a platform where the app is running.
Enter a unique identifier in the Define a test key field to filter your tests easily. The test key helps you execute custom logic, as explained in this iOS Code Blocks/Android Code Blocks section.
Select Next and click Add Variable. All the variables you have created for the test are displayed here. You can choose to Create Variable by adding a new test variable.
Select the variable you want to test, and then enter the variation values. You can test multiple variables in one test. In the example above, we have added speed variable, defined value as 20 for the variation. For control, the value is 10, which is the default value for the variable.
Based on the test key and variation names, VWO generates the code snippet that you can use in the mobile app.
To continue, click Next
Define Goals
In the next step, define at least one goal. The Goal is a conversion metric that you may want to optimize.
To edit the goal name, click the corresponding Goal icon. In the box below the Goal icon, use the drop-down menu to select an event that you want to track. Provide the relevant information in the Goal Identifier text box.
To define more goals, select Add Another Goal or select Next.
conversionGoal
Finalize
In the Finalize step, we need to specify the campaign name. Next, we can set the percentage of users that we want to include in the campaign.
Under Integrate With Third-Party Products, select the box corresponding to the product name with which you want to integrate the app.
Under Advanced Options, you can also target the campaign for specific user types, enable scheduling, customize traffic allocation for each variation, or make the user part of the campaign on app launch.
For quick setup, we can leave those settings to default.
Click Finish.
On the next screen, click Start Now to run the campaign.
Installing Library
The library can be installed through npm:
$ npm install --save vwo-react-native
iOS
- Add pod 'VWO' to the Podfile present in the ios directory.
cd ios && pod install
- Open the Xcode workspace, and drag all the files from node_modules/vwo-react-native/iOS to your project.
Android
- Link the vwo-react-native library.
$ react-native link vwo-react-native
- Add this to your android/build.gradle file:

allprojects {
    repositories {
        ...
        mavenCentral()
        ...
    }
}
Manual installation
- Open android/app/src/main/java/[...]/MainActivity.java
- Add import com.vwo.VWOReactNativePackage; to the imports at the top of the file.
- Add new VWOReactNativePackage() to the list returned by the getPackages() method.
- Append the following lines to android/settings.gradle:

include ':vwo-react-native'
project(':vwo-react-native').projectDir = new File(rootProject.projectDir, '../node_modules/vwo-react-native/android')
- Insert the following lines inside the dependencies block in android/app/build.gradle:
compile project(':vwo-react-native')
- Add this in your android/build.gradle file:

allprojects {
    repositories {
        ...
        mavenCentral()
        ...
    }
}
Enable the preview mode
Preview is enabled by default for debug builds. To enable the preview mode in a release build, shake your device 3-4 times.
Disable the preview mode
Preview mode can be disabled by setting the disablePreview flag to true in your config object:
var config = { disablePreview: true }
Code changes
Throughout our SDK in all callbacks, we use Node's convention to make the first parameter an error object (usually null when there is no error) and the rest are the results of the function.
1. Initialising the SDK
After installing the library, you would want to initialize it.
Import the Library as follows:
import VWO from 'vwo-react-native';
Library can be initialized in the following ways:
I. Launching VWO SDK
var config = { optOut: false, disablePreview: false, customVariables: {} };

VWO.launch('YOUR_API_KEY', config).then(() => {
  console.log("Launch success");
});
Launch configuration
You can pass a config object during the launch of the VWO SDK.
Config is a JavaScript object which can have the following keys:
optOut: it can have a boolean value which tells the VWO SDK whether to initialize the SDK or not. It defaults to false.
disablePreview: Boolean value to turn on or off the preview mode. It defaults to false.
customVariables: Takes in a javascript object as its value. Check Targeting Visitor Groups / Targeting Visitor Groups for more details. It defaults to an empty object.
customDimensionKey: String value which is the unique key associated with a particular custom dimension made in the VWO application. Check Push Custom Dimension for more details. It defaults to an empty String.
customDimensionValue: String value which is the value you want to tag a custom dimension with. Check Push Custom Dimension for more details. It defaults to an empty String.
If you do not wish to pass any config object, you can pass null.
var config = {
  optOut: false,
  disablePreview: true,
  customVariables: { user_type: "free" },
  customDimensionKey: "CUSTOM_DIMENSION_KEY",
  customDimensionValue: "CUSTOM_DIMENSION_VALUE"
}
You can set this config object as follows:
VWO.launch('YOUR_API_KEY', config).then(() => {
  console.log("Launch success");
});
2. Using campaign
To use the variation defined during campaign creation, use any of the following functions to get the value for the campaign keys.
VWO.objectForKey(key, DEFAULT_VALUE).then((result) => {
  // Your code here
});

VWO.intForKey("key", 1).then((result) => {
  // Your code here
});

VWO.stringForKey("key", "default_value").then((result) => {
  // Your code here
});

VWO.floatForKey("key", 0.0).then((result) => {
  // Your code here
});

VWO.boolForKey("key", false).then((result) => {
  // Your code here
});
When these methods are invoked, the SDK checks if the targeting conditions hold true for the current user.
If targeting/segmentation conditions hold true, the user is made part of the campaign and visitor counts in the report section increments by one (once per user).
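To see how a getter fits into app code, here is a runnable sketch; the VWO constant below is a stand-in stub that mimics the SDK's promise-based shape (the key name checkout_button_text and the button flow are invented for illustration, not part of the VWO docs):

```javascript
// Stand-in stub for the real 'vwo-react-native' module, so this sketch is
// runnable anywhere; it always resolves to the default (control) value.
const VWO = {
  stringForKey: (key, defaultValue) => Promise.resolve(defaultValue),
  trackConversion: (goal) => { /* would report the goal to VWO servers */ },
};

// Typical flow: read the campaign value, render it, then mark the goal
// when the user converts.
function renderCheckoutButton() {
  return VWO.stringForKey("checkout_button_text", "Buy now").then((label) => {
    console.log(label); // the control text, since our stub never buckets users
    return label;
  });
}

renderCheckoutButton().then(() => VWO.trackConversion("conversionGoal"));
```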
Test can also be created without variables. Campaign test key can be used to fetch the variation name. This variation name can be used to execute custom logic.
VWO.variationNameForTestKey("campaign_key").then((variationName) => {
  if (variationName == "Control") {
    // Control code
  } else if (variationName == "Variation-1") {
    // Variation 1 code
  } else {
    // Default code
  }
});
NOTE
Can be called only after SDK initialisation. Otherwise, a null value is returned.
3. Triggering goals
We would track the effect of this campaign on our conversion metric.
Earlier we defined conversionGoal as a goal.
We need to tell the VWO SDK when this conversion happens. Use the following code to trigger this goal.
var goal = "conversionGoal";
VWO.trackConversion(goal);
For triggering a revenue goal, use the method trackConversionWithValue(goal, value).
var goal = "conversionGoal";
VWO.trackConversionWithValue(goal, 133.25);
NOTE
Can be called only after SDK initialisation. Otherwise, the goal is not marked.
4. Push Custom Dimension
Pushes a custom dimension for a particular user to the VWO server. It is used for post-segmenting the data in the campaign reports.
Read here on how to create custom dimension in VWO
The API method accepts a custom dimension key - customDimensionKey and custom dimension value - customDimensionValue.
customDimensionKey is the unique key associated with a particular custom dimension made in the VWO application.
customDimensionValue is the value you want to tag a custom dimension with.
VWO.pushCustomDimension("CUSTOM_DIMENSION_KEY", "CUSTOM_DIMENSION_VALUE");
Logging
To enable logging in the SDK, use VWO.setLogLevel(level).
You can set different log levels depending upon the priority of logging as follows:
- logLevelDebug: Gives detailed logs.
- logLevelInfo: Informational logs
- logLevelWarning: Warning is a message level indicating a potential problem.
- logLevelError: Indicates Error
- logLevelOff: No logs are printed
The different methods set the log level of the message. VWO will only print messages with a log level that is greater than or equal to its current log level setting. So a logger with a level of Warning will only output log messages with a level of Warning or Error.
VWO.setLogLevel(VWO.logLevelDebug);
See Android Logging for verbose logging in Android SDK.
Opt out
To opt out of tracking by VWO, use the config object to set optOut to true or false. This config object is passed when the VWO.launch function is called.
var config = { optOut: false };

VWO.launch('YOUR_API_KEY', config).then(() => {
  console.log("Launch success");
});
Reports
From the Mobile App A/B menu, select your campaign and click Detailed Reports to see reports of your campaign.
Source Code
VWO React-Native Library code is available on GitHub:
Next Steps
As a next step, take a look at:
Detailed iOS documentation: SDK Reference
Detailed Android documentation: SDK Reference
We look forward to hearing from you with any questions or feedback at [email protected].
Application Development
Introducing new Cloud Source Repositories
At Google, we spend a lot of time each day working with code. As Google has grown and the code base has increased in complexity, Google engineers have built a set of code tools to help our developers stay happy and productive each day. One particularly essential tool is code search, which is well-loved by Google engineers, and used by most engineers here multiple times a day to improve their productivity.
We’re pleased to bring code search to you with the newly revamped Cloud Source Repositories in beta availability. It features an entirely new user interface and semantic code search capabilities. Cloud Source Repositories is powered by the same underlying code search infrastructure that Google engineers perform their code searches on every day.. Using this code search can improve developer productivity, whether you host your code in Cloud Source Repositories or mirror your code from the cloud versions of GitHub or Bitbucket.
Speeding up code search for better development
As developers ourselves, we know how frequently we need to search code. One developer case study found that programmers conduct an average of five search sessions with 12 total queries each workday.
These search queries are often targeted at a particular code location, and programmers are typically looking for code with which they are somewhat familiar. Programmers are generally seeking answers to questions about:
what code does,
where is code instantiated,
why code is behaving a certain way,
who was responsible for an edit and when it happened,
and how to perform a task.
There are some common code search challenges we’ve encountered. Here are a few of those, along with how Cloud Source Repositories handles them.
You want to search across all the code at your company, but there are a lot of repositories with only a few stored locally—which aren’t up-to-date with the versions on the server. Storing code locally isn’t a great option, and your computer’s search tools aren’t very powerful. When you’re using Cloud Source Repositories, the default branches of all repositories are always indexed and up-to-date. It’s simpler and faster to search across all the code you have access to for a particular file/class/function, rather than hunting for the code you have stored locally.
You’re looking for code that performs a common operation that is used by other people in the company. With Cloud Source Repositories, you can do a quick search and find that code. By discovering and using the existing solution rather than reinventing a new solution, you’ll save time, avoid introducing bugs and keep a healthier code base by not adding unnecessary code that has to be maintained over time.
You don’t remember the right way to use a common code component like an event handler. With Cloud Source Repositories, you can enter a query and search across all of your company’s code for examples of how that event handler has been used successfully by other developers. You can write the code correctly the first time.
You discover an issue with your production application. It reports a specific error message to the server logs that reads “User ID 2503295 not found in PaymentDatabase.” You can perform a regular expression search for “User ID .* not found in PaymentDatabase” and instantly find the location in the code where this error was triggered. Then you can get a fix deployed to production for users. Rich regular expression matching allows an easy way to find usages, definitions and sample code, and helps for refactoring.
Search across all your code
Bringing Google’s code search capabilities to Cloud Source Repositories means you’ll get the benefits of Google Search technology for your code base searches.
One key benefit is that now all owned repositories that are either mirrored or added to Cloud Source Repositories can be searched in a single query.. Cloud Source Repositories respects all identity and access management (IAM) permissions, so users won’t see any code in search that they shouldn’t have access to, and there are no additional permissions to set up.
In addition, Cloud Source Repositories can search across thousands of different repositories with a single query. You can search for either files or code within files in the default branch of your repository. Cloud Source Repositories also has a semantic understanding of the code, which means that the search index identifies which parts of your code are entities such as classes, functions, and fields. Since the search index has classified these entities, your queries can include filters to limit the search to classes or functions. It also allows for improved search relevance by ranking important parts of code like classes, functions, and fields higher. By default, Cloud Source Repositories allows powerful search patterns using RE2 regular expressions so you can find the answers you need for even very complex questions. (You can work with individual special characters by using a backslash or an entire string by enclosing it in quotation marks.)
As you type in the search box, you’ll get suggestions for matching results (shown below). For Java, JavaScript, Go, C++, Python, TypeScript and Proto files, you’ll see result suggestions indicating whether the match is an entity such as a class, method, enum, or field.
Learn more about searching through your code in Cloud Source Repositories.
How Google code search works
Code search in Cloud Source Repositories uses the same search technologies as Google Search, but optimizes the indexing, algorithms, and result types for searching code. When you submit a query, that query is sent to a root machine and sharded to hundreds of leaf machines. It looks for matches by file names, classes, functions and other symbols, and matches the context and namespace of the symbols. If regular expressions are specified in that query, our code search runs an optimized algorithm to quickly find potential matches for the regular expression. Then, it refines results against the full regular expression to find the actual matches, so the code search can match complex regular expressions very quickly. In addition, Google’s code search looks for relevant snippets of the code around the search to provide additional context for the code match.
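As a toy illustration of that "cheap pre-filter, then verify with the full regex" idea (my own sketch, not Google's actual implementation):

```python
import re

# Illustrative only: find candidate files fast with a literal substring,
# then run the full (expensive) regex only on those candidates.
files = {
    "payments.py": 'raise Error("User ID %s not found in PaymentDatabase" % uid)',
    "readme.md":   "How to run the payment service",
}

pattern = re.compile(r"User ID .* not found in PaymentDatabase")

# Cheap pre-filter: require a literal fragment of the pattern...
candidates = [name for name, text in files.items() if "PaymentDatabase" in text]

# ...then verify with the actual regular expression.
matches = [name for name in candidates if pattern.search(files[name])]
print(matches)  # ['payments.py']
```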
If you haven’t used Cloud Source Repositories before, you can try it today at no cost with the Google Cloud Platform (GCP) free trial and our generous free tier. When you navigate to Cloud Source Repositories, you’ll see a signup page that will guide you to add your code. You can create a new empty repository or mirror your code from the cloud version of GitHub or Bitbucket. You can easily populate new empty repositories by pushing code from a local machine or writing new code without leaving the browser using the Cloud Shell editor.
If you have existing code stored in Cloud Source Repositories, you’ll be greeted with a new personalized landing page that presents a view of all of the repositories you can access across all of your GCP projects. Over time, your personalized landing page will become populated with the recent code you’ve browsed and areas of the code base you’ve favorited.
Once you begin storing your code in Cloud Source Repositories, it’s easy to take advantage of other services from GCP. Some popular integrations are:
Configuring Cloud Build to automatically deploy a new build to App Engine when a commit lands in a branch (View Quickstart)
Using version control for Cloud Functions (View Quickstart)
Inspecting the state of your applications stored in Cloud Source Repositories in real time using Stackdriver Debugger (View Quickstart)
Publishing commit events to Cloud Pub/Sub to integrate with any third-party tool of your choice (View Quickstart)
Is there a way to create a sub-component? - Edilmar Alves, Dec 17, 2009 9:32 AM
Hi,
I have used RichFaces 3.3.2 to create my xhtml pages.
Sometimes I have to create many rich:calendar components with specific inputSize/datePattern/etc. values,
and many outputText components with specific convertDateTime/convertNumber/etc. converters.
Then, I would like to know if it is possible to create, for example, a component
named "mycalendar", "myOutputDate" or "myOutputNumber", with some standard
values for many properties, but these comps must inherit from the default JSF
or RichFaces components. I don't want to implement completely new comps, only
to set up "default values" for existing comps. Is this easily possible?
I didn't find articles on the internet about setting up default values, only about creating
complete new components...
Thanks in advance,
1. Re: Is there a way to create a sub-component? - Ilya Shaikovsky, Dec 17, 2009 9:46 AM (in response to Edilmar Alves)

Please explore the Facelets documentation. Custom facelet creation should be a perfect solution for your case.
2. Re: Is there a way to create a sub-component? - Rafael de Carvalho, Dec 17, 2009 10:36 AM (in response to Edilmar Alves)
If I understood correctly, I would suggest using Facelets capabilities to create custom tags.
You can define your own taglib and custom tags without any giant effort. It gets a pretty clean and effective code.
Take a look at section 3.5.5 of this document:
I guess that you want something like it.
Hope that helps.
3. Re: Is there a way to create a sub-component? - Rafael de Carvalho, Dec 17, 2009 10:27 AM (in response to Rafael de Carvalho)

Sorry for replying with the same thing.
I wrote the answer, then got busy and forgot to submit the message, and am only doing it now.
4. Re: Is there a way to create a sub-component? - Ilya Shaikovsky, Dec 17, 2009 10:33 AM (in response to Rafael de Carvalho)

No, that's cool: a clarification, with the link I forgot.
5. Re: Is there a way to create a sub-component? - Edilmar Alves, Dec 17, 2009 1:57 PM (in response to Ilya Shaikovsky)

Hi friends,
I looked at the Facelets docs, "3.5.6 Custom Tags" and "7. Extending Facelets".
But I don't know if this is the way to solve my problem.
I wouldn't like to write Java code for a new component.
Suppose I have the following rich:calendar:
<rich:calendar value="#{consCliente.dataIni}" id="dataIni" zindex="..." inputSize="..." datePattern="..." />
Then, I would like to create in my templates a child of rich:calendar, like this:
<MYcalendar value="#{consCliente.dataIni}" id="dataIni" />
where all other properties (zindex, inputSize, datePattern, etc.) default to the values above.
=====================================
Or, other example, if I have these outputText's:
<h:outputText value="..."><f:convertDateTime pattern="..." /></h:outputText>
<h:outputText value="..."><f:convertNumber pattern="..." /></h:outputText>
I would like to create like this:
<h:MYoutputDate value="..." /> <h:MYoutputNum value="..." />
and default values for converting date/number are used.
Is the only way to do this to create new components in Java using the Facelets documentation,
inheriting from rich:calendar or outputText? Or is there some XML JSF/Facelets
mechanism to create a "templating component" from a default component?
Thanks in advance,
6. Re: Is there a way to create a sub-component? - Ilya Shaikovsky, Dec 17, 2009 2:06 PM (in response to Edilmar Alves)

Section 3.5.5 is what you really need.
In a few words, to create such a facelet from a template you should:
- create template itself
- create taglib and register tag which uses source template.
- add your taglib namespace to the page and add the new tag itself passing defined params.
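In outline, the three artifacts look something like this. This is only a minimal sketch: the namespace URI, the file paths, and the default attribute values are placeholders made up for illustration, not something from this thread.

```xml
<!-- 1) WEB-INF/my.taglib.xml: declare the taglib and register the tag -->
<facelet-taglib>
  <namespace>http://example.com/my</namespace>
  <tag>
    <tag-name>mycalendar</tag-name>
    <source>tags/mycalendar.xhtml</source>
  </tag>
</facelet-taglib>

<!-- 2) WEB-INF/tags/mycalendar.xhtml: the template, with the defaults baked
     in; "value" arrives as a parameter from the calling page -->
<ui:composition xmlns:ui="http://java.sun.com/jsf/facelets"
                xmlns:rich="http://richfaces.org/rich">
  <rich:calendar value="#{value}" inputSize="10" datePattern="dd/MM/yyyy"/>
</ui:composition>

<!-- 3) In a page: import the namespace and pass only what varies -->
<my:mycalendar value="#{bean.date}" xmlns:my="http://example.com/my"/>
```

Remember to point the facelets.LIBRARIES context parameter in web.xml at the taglib file so Facelets picks it up.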
7. Re: Is there a way to create a sub-component? - Rafael de Carvalho, Dec 17, 2009 2:11 PM (in response to Edilmar Alves)

Edilmar,
it looks like you're Brazilian, I guess.
If you are, just take a look at this:
For those who don't know Portuguese, just looking at the code arrangement gives an idea of how it works.
8. Re: Is there a way to create a sub-component? - Edilmar Alves, Dec 17, 2009 4:02 PM (in response to Rafael de Carvalho)

Hi friend, I'm Brazilian too...
I tried to follow the code from JavaMagazine, but now the component disappears.
Look at these steps:
1) Create the SubMacroFaces.taglib.xml file in the WEB-INF subdir:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE facelet-taglib PUBLIC "-//Sun Microsystems, Inc.//DTD Facelet Taglib 1.0//EN" "facelet-taglib_1_0.dtd">
<facelet-taglib>
  <namespace></namespace>
  <tag>
    <tag-name>smdate</tag-name>
    <source>smdate.xhtml</source>
  </tag>
</facelet-taglib>
2) Configure web.xml:
<context-param>
  <param-name>Facelets.LIBRARIES</param-name>
  <param-value>/WEB-INF/SubMacroFaces.taglib.xml</param-value>
</context-param>
3) Create smdate.xhtml file into WEB-INF subdir:
<?xml version="1.0" encoding="UTF-8"?>
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets"
      xmlns:rich="http://richfaces.org/rich">
  <ui:composition>
    <rich:calendar ... />
  </ui:composition>
</html>
4) Use the new component sm:smdate into a .xhtml file... two configs:
4.1) Header of the .xhtml file:
<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE composition PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<ui:composition xmlns:ui="http://java.sun.com/jsf/facelets"
                xmlns:sm="..." ...>
4.2) Using the component...
<sm:smdate value="#{consViagensCliente.dataIni}" />
The rich:calendar disappeared. The value uses the "consViagensCliente" managed bean and its "dataIni" property.
9. Re: Is there a way to create a sub-component? - Ilya Shaikovsky, Dec 18, 2009 6:10 AM (in response to Edilmar Alves)

If the component does not appear, check the generated HTML. If you can see something like <sm:smdate there, the problem is with the configuration.
10. Re: Is there a way to create a sub-component? - Rafael de Carvalho, Dec 18, 2009 10:06 AM (in response to Ilya Shaikovsky)

That's a common problem at the start of any project.
Try reconfiguring Facelets and pay attention to the paths, especially the declared paths of the custom components you've implemented.
Writing code that any programmer who reads it can understand is a must-have skill for software developers. The fact is: only 20% of programmers have this ability.
“Any fool can write code that a computer can understand. Good programmers write code that humans can understand.” — Martin Fowler
When I started caring about code readability I noticed that my code started to be:
Robert "Uncle Bob" Martin's "Clean Code: A Handbook of Agile Software Craftsmanship" is the clean coder's bible. The book talks about code, behaviour, automated tests, and so on.
One of Clean Code's chapters talks about meaningful naming. In this story, you are going to be the code reader. Take a look at this function:
def calc(n1, n2)
return n1 / n2
end
Do you think "calc" is a good name for this function? Uncle Bob would say: no! Why?
This function divides two numbers. "divide" is a good name for it.
def divide(n1, n2)
return n1 / n2
end
result = divide(1, 2)
We still have problems with it. The parameters "n1" and "n2" are not semantic. What if we call them "dividend" and "divisor"? The same applies to the "result" variable: it should be called something like "quotient".
def divide(dividend, divisor)
return dividend / divisor
end
quotient = divide(1, 2)
Much more semantic!
If you are not yet convinced to read this book, take a look at this picture and buy this must-read book!
Thank you for reading! Don't forget to follow me on Medium, Instagram and LinkedIn.
Really people, I hate to say this, but when are you going to get your priorities straight?
Bryce 7 has been out for how long, and it still isn’t working on Lion, and yet you have enough time, money, and effort to sink probably a few thousand dollars into redeveloping the front end and back end of your web server?
I’m sorry… but this is a problem… I’ve forced myself to learn new software instead of using the tried and true Bryce… The learning curve was massive, and I still can’t do everything Bryce allowed me to do…
Yes, people here are going to be apologists, and I don’t care what they say, especially since they say Bryce isn’t in the development cycle yet. I am calling BS on this, because the next version to be in development, if any, is version 8… so why even come out with Bryce 7 if it was never going to work with an OS that has been out for almost a year?
So what is the reason really?
It’s quite simple… they don’t have enough people to concentrate on special issues within the company’s development and deployment of its products. Take Bryce for example… since Bryce 5.5 (which is the last stable version, because Corel created it, not DAZ), Bryce has constantly been adding new features and upping the version numbers before the bugs have been worked out. In short, Bryce is and has been a beta release passed off as a full stabilized version. How many projects have I lost to program crashes? Why do you think DAZ now offers products for free that I have already paid for (Bryce 7)? Not only is it offered free now; until a short while ago DAZ Bryce 7 Pro was free as well, which I hadn’t purchased. I just can’t help feeling that I’ve been ripped off. But offering it for free is an acknowledgement that it is being offered for what it is worth: a beta version of little value. But there is hope.
In the short term, do you know what would alleviate that problem? Corel VideoStudio Pro has a ten-minute auto-save option, where the program automatically saves your current project every ten minutes. Just create the file, add an object to it, and save under your project name. Then delete your object and begin working on your scene, importing characters, whatever. In ten minutes the project and all its changes will be saved, and every ten minutes your progress will be automatically preserved without a crash wiping it all out.
In the long term, use product updates rather than passing off more than five improvements as an entirely new version. The only difference I see between Bryce 6 and 7 is the expanded lighting options (dome lighting, fill light, etc.) and the ability to import mainstream HDRIs instead of just probe HDRIs that you had to create in HDRShop. I’m not touching the content management fiasco… they should have a team hunting out and retrofitting metadata to all existing products developed over the last ten years. Or at least create a separate metadata thread where product creators can post metadata updates to their own products. Anything pre-Genesis, even the original V4, doesn’t have metadata, nor do the clothes created for it even by DAZ (bikini, boy shorts, etc.).
I haven’t bought anything since the new storefront opened, but it’s another example of how DAZ is multitasking existing staff instead of bringing in outside experts to debug the store. The complaints have been deafening, and I’m afraid to purchase anything in the new store. The idea of offering a “sorry we destroyed the front-end store” sale is almost unprecedented. That should have gone through beta testing and been stabilized before being forced on us. What was wrong with the old storefront anyway? Stop trying to ‘fix’ things that already work fine, and concentrate on what doesn’t: like Bryce itself.
DAZ is slow to recognize that it has grown beyond the capacity to have volunteers from within the DAZ community do the heavy lifting, especially in areas they themselves have limited knowledge of. Software debugging, backward compatibility and marketing are diverse and specialized issues that can no longer be adequately handled by volunteers. I’m sure people like Horo have given up their hobbies and family life due to the overwhelming issues they must answer for. It has to be a full-time job in itself.
And I’m just talking about Bryce… God knows what is happening with Hex and Carrara; I’m actually afraid to use them because of the issues I’ve experienced in Bryce. How can you learn a program that keeps crashing before you can learn how to use it?
By the way, when is a 64-bit version of Bryce coming out? I bought a specialized PC to handle files twice as large as what my current one can manage, which accepts only 3 or 4 characters at a time imported into a Bryce scene. It has been a waste of money, because while Studio may have 64-bit versions, they aren’t really needed: I import my characters into the infinitely superior Bryce environment, and I may as well have saved my money. Bryce is a 32-bit product, and half of the 6 GB of memory is being wasted because Bryce can’t access it. I strongly suggest you update the processing capacity of Bryce to its full potential before even thinking that a Bryce 8 should be offered.
My apologies for ranting, but I know I speak articulately for those who feel Bryce is underdeveloped and running on yesterday’s technology. What sophisticated art application hasn’t graduated to 64-bit? None that I know of. Except Bryce.
I won’t comment on your post, brycescaper. It’s your opinion and this is a free world (well, mostly). Just two items: MetaCreations were working on Bryce 5.0 when Corel bought it from them. Bryce 5.0 had issues but they were resolved within a few days, and Bryce 5.01 was indeed very stable. Bryce 5.5 was made by DAZ and was stable as well.
Bryce 6.0 was not very good and DAZ brought in a former key developer from Metacreations and he fixed a couple of things. Bryce 6.1 is quite stable on the PC. There were issues on the Mac, these were resolved with 6.3. 7.0 was again not very good but 7.1 is quite good - at least for the PC. For the Mac, this is another matter. “Good” options to crash 7.1 are displacement and Instances, though you can make them work - nevertheless, both features are not mature at all. The rest works fine (there are rumors about animation, but this is a black spot for me).
Since Bryce 6, the file saved is compressed and this compression happens in memory - decompression after loading a file as well. This was not a very wise decision IMO because we have that 2 GB memory limit of a 32-bit application. Bleeding things out on the HD would make things slower, but would at least work. Luckily, we can make Bryce large address aware and it can use up to nearly 3.5 GB.
64-bit application - that was indeed the goal for 7. However, the task proved to be too labour intensive and - at the time - out of the time frame and most probably also beyond budget. You see, a lot of code is written in Axiom and this has to be rewritten from scratch. Unfortunately, it is not a matter of just compiling for 64-bit.
I understand this is my opinion, but I haven’t used or noticed these other improvements you spoke of (primarily because I haven’t noticed them: I use Bryce as a backdrop for imported DAZ figures). There isn’t a need to defend Bryce’s improvements by making a laundry list of them. What you are missing is that, whatever the intentions, DAZ has changed into a totally different company than when it started (I still have DAZ 1.3 on my system), and the volunteer-hobby approach cannot be relied on completely as the software becomes more and more complex and more and more products are offered. The people who developed the ten-minute auto-save should be brought in as consultants so that useful feature can be added to Bryce. Then the crashing wouldn’t be such a problem: how many major changes can one make in ten minutes that can’t be redone, especially when you did them only ten minutes before? Then it can crash as much as it wants with minimal damage. I’m not a programmer, so I understand why Bryce has to be rewritten to accommodate 64-bit.
I sounded critical, but it was more frustration. I felt I had to explain to the original poster that DAZ is going through growing pains, and that Bryce is the most user-friendly and photorealistic application, especially when interfaced with the DAZ characters. I am evangelistic about Bryce and its possibilities; I wasn’t understanding why DAZ hasn’t kept pace and become the leader in the market, but that has been answered as well. If you can’t afford it, you make the best of what you have. I personally plead with you to consider the auto-save option: I get so wrapped up in creating my scenes that I have lost entire projects because I didn’t stop and manually save. I expect Bryce to keep working until I reach a stopping point when I remember to save. A restore point would be nice to have, not from the version saved days before, but from minutes before.
DAZ has been going through growing pains since they took over Bryce, then… that is not a plausible excuse; that’s an excuse because we don’t know the answer. Especially when, in reality, none of their software works on my Mac.
You know, I am learning Blender because DAZ can no longer be expected to make software that works… let’s be honest: if they cared about the OS that gave them the funds they now use to ignore one of the largest growing computer markets out there, then perhaps it’s time to hang up the DAZ appreciation hat and put on one of pure skepticism, because DAZ has shown nothing in the past two years to suggest they care about the Mac platform, or Bryce… which is a shame.
Also, why in the hell do I have to know HTML to post correctly on this forum? Whoever designed this POS we call a forum needs to go back to school and learn real web design… because I know a dozen people who could have done this without requiring users to know HTML to post correctly here.
Just my two cents plus change.
I am not quite certain to what you are referring when you say you need HTML in order to post correctly.
My knowledge of HTML could be written several times on the back of a postage stamp, and there would still be space left
I admit quite honestly I had a “cheat sheet” that I used when doing certain things that mods do on the old forum.
As to the Mac and Bryce problem, there is one post somewhere in this forum by a member of the DAZ_Management team that does state that as soon as Bryce is taken back into the dev cycle, this issue has top priority, and may in fact be the only thing that is updated.
When you need to type < br > (close together) to break a paragraph, or < b > to bold, that is HTML code.
BBCode is [ b ], and it knows what to do when a paragraph is made…
This is 2012, the year of smartphones, etc.… stuff like this should not be required. I know more about HTML than a lot of people out there; I have created spam-blocking scripts, and scripts that can identify your IP, your web browser, your OS, and a bunch of other things. I should not have to type anything to make a paragraph break, and yet I do here…
That being said, the DAZ_Management_Team has been saying that for over a year… time to fess up that the program isn’t on their docket for the foreseeable future, and that those like me, who actually do make a living off of some 3D side work, are SOL.
Sorry, that line worked on me for a few months, but you know what, it’s like politicians and diapers… they both need to be changed frequently, and most of the time for the same reason. DAZ is starting to fall into that category… I used to buy their stuff without a second thought; now I won’t touch it with a 10-foot pole till they solve this issue.
I actually use return space return works every time, without any fancy html.
I use 3 times return. Faster and works as well. In fact, it has never occured to me to use HTML tags here.
chohole - 03 July 2012 06:46 AM: I actually use return space return works every time, without any fancy html.
Perhaps for bold, but it’s a moot point.
Horo - 03 July 2012 07:48 AM: I use 3 times return. Faster and works as well. In fact, it has never occured to me to use HTML tags here.
So you are happy pressing return three times to get a single space between comments/paragraphs? Glad you are; I’m not. That’s poor practice.
No for bold I use the format tools at the top. for paragraph breaks I use enter space enter.
chohole - 03 July 2012 09:14 AM: No for bold I use the format tools at the top. for paragraph breaks I use enter space enter.
I used the same thing, and my paragraphs run together… not exactly the way I want to do things.
These are two paragraphs, separated by a hard return in between to make a space; doesn’t look like it, does it?
If I use the < br >, like I did before this paragraph, it separates; sorry, that’s HTML… Instead of spending time on the forum and front end, they should have spent time and money on the software… making it actually work, rather than on this forum and on showing me I can have a DAZ-a-Holic logo attached to my account… which honestly right now I wouldn’t be caught dead with, because anyone in the 3D world knows DAZ doesn’t work.
Return (or enter)
space
return (or enter)
Like that
chohole - 03 July 2012 09:30 AMReturn (or enter
space
return (or enter)
Like that
That is called a hack, or tweak… that should not be required.
This is a mirrored post from ClimateAudit.org, which is terribly overloaded. (Note: Try not to click this link now, CA is overloaded. Can’t even get to it myself to mirror it. -A)
TGIF-magazine has already asked.
312 thoughts on “Mike’s Nature Trick”
Read the excuse for this on Realclimate.com:
“Scientists often use the term “trick” to refer to a “a good way to deal with a problem”, rather than something that is “secret”, and so there is nothing problematic in this at all”
Hahaha!
By the way: WUWT has 1000 comments on one topic for the first time…
Dr Phil Jones’ 13.7 million British pounds in grants, seen in one of the XLS files,
indicate that Jones’ less sophisticated tricks to hide the decline, relative to Mann’s more refined ones, may be sufficient to fool the science-limited British taxpayer. ;-)
Take no prisoners.
Go go go.?
Excellent work.
Awesome
I would be interested to hear some informed opinion comparing/contrasting this piece to RC’s explanation for “hide the decline”. They seem pretty consistent to me.
If Jones had written “address the divergence problem” instead of “hide the decline” would we be talking about that email at all?
Not that I’m entirely comfortable with “the divergence problem”. I’m not. To me it is evidence that the dendros’ specialty is not ready for prime time, given the grave responsibilities it is currently being relied on for. Not that they shouldn’t keep working on it. Not that, perhaps, it won’t eventually get there (and perhaps not), but it isn’t there yet.
Still, if Jones had written “address the divergence problem” instead, a well-known, non-controversial issue (in the sense that everyone knows it is there), would we be having this conversation about that particular email?
WOW – this has totally blown off the lid off. We have evidence of premeditated deceit and then outright denial when caught red handed.
Dr. Jones et al., it is time to do the honorable thing when caught red-handed as cheats and liars. PLEASE RESIGN before you are all FIRED.
As delicious as this all is, and as much as I agree that it shows these climate “scientists” to be total a$$es, I’m not at all sure it represents the smoking gun so many seem to think.
But what clowns they are.
Keep it up!!! Holy cow, they are on the ropes now. It won’t be long before the ref stops this fight!
As Hunter said, go go go!!!!
Re “trick”, that was addressed in the other thread by many people –just google “tips and tricks” (with the quotes) and look at the responses you get and their nature. I got “about 12,000,000” when I just did it.
there are going to have a hard job burying this one but they are going to try very very hard
Hopefully some people will stand up to in the science industry for bringing there profession into disrepute
I agree with RC… “Trick” does NOT mean they were trying to deceive anyone. No way. Nothing problematic here. Instead, “trick” means selling their souls for power, prestige and money…as in the prostitution sense. Duh!
The emails are now searchable online here:
I typed in “skeptics” and the first email I read was allegedly from renowned climatologist Tom Wigley written in 1997 castigating his fellow scientists for misrepresenting the science to influence Kyoto. Here is the link:
And here are a few paragraphs from the email:
The amount of karma in having CRU, the great secret host of data that cannot and will not be released to peons who just want to find problems, now having all that data and then some exposed, is massive.
I hope the irony is not lost on Jones, Mann et al. Maybe they should write a new paper, get it peer reviewed, and then have it published.
Posted on Richard’s site at the BBC.
While I can’t condone hacking, it’s great to see this information out in the open.
To those defending the ‘trick’:
The problem is not that he called his technique a ‘trick’.
The problem is that the intent of his technique is to fool people into thinking something is going on that is not, in fact, going on.
Mike’s naturtrick:
How specious of you to point this out. ;P If no one can get to CA, why is it so busy?!
Read the excuse for this on Realclimate.com:
“Scientists often use the term “trick” to refer to a “a good way to deal with a problem”, rather than something that is “secret”, and so there is nothing problematic in this at all”
Hahaha!
I personally use the word ‘murder’ to mean ‘wonder if that person would be nicer if they owned a puppy’.
Any future confessions or other statements by me should be read with this in mind.
RC: “Scientists often use the term “trick” to refer to a “a good way to deal with a problem”, rather than something that is “secret”, and so there is nothing problematic in this at all.”
A GOOD WAY TO DEAL WITH A PROBLEM?!?!?!?
The problem happened to be a downward trend in their analysis.
They didn’t want that. So they found a way to cover it up or “hide it.”
Argue over nouns and adjectives later…this is FRAUD!
–If Jones had written “address the divergence problem” instead of “hide the decline” would we be talking about that email at all?–
The problem is the way they addressed the divergence problem was to simply hide the decline with plaster and lath.
“Hide the decline” is much closer to the truth than “address the divergence problem” as the former denotes a knowingly improper method whereas the latter is more closely associated with, you know, science. That’s probably why he used the term he did.
Luboš Motl (14:12:30). Hear-hear!
I think Jones really believes that grafting practise is acceptable. That these trees were great thermometers right up until 1960/61. As Tim Osborn said to Michael Mann, “we usually stop the series in 1960 because of the recent non-temperature signal that is superimposed on the tree-ring data that we use.”
I’ve tended to think of this handful of trees (out of hundreds and thousands of samples and probably hundreds of thousands not yet sampled) as “magic” trees, lucky to be found. Trees that can grow in step with the historic temperature “reconstruction” for 120 years, and become a proxy for centuries, even millennia, of past temperatures.
Now I find it’s only 80 years tracking the reconstruction, I just don’t know what to think. Oh wait, I do.
Hide is good in tanning, which is what I hope they get.
I typed in fake and found this little snippet From Phil Jones
“I’m away all next week – with Mike. PaleoENSO meeting in Tahiti – you can’t
turn those sorts of meetings down!”
Indeed if only we could all have those sorts of meetings!
“13.7 million British pounds in grants”
So ask yourself … “If data ‘is what it is’, then why is it so important that the graphics show warming?” then repeat “13.7 million pounds in grants”, then ask yourself “If there were no ‘dramatic’ warming, what would be the consequence for them.” then repeat “13.7 million pounds in grants”.
It bothers me that gyrations must be done in order to show warming, when, if the warming is real and there, the data would not need so much “massaging” in order to tease that warming out of it. It is like: “Well, if you turn it sideways, squint, tilt the paper a little this way, close one eye… see that! Warming! Now get us some money so we can ‘study’ it some more.”
The real problem to me with “the divergence problem” is *they don’t actually know what causes it*. So they know it exists post-1960. But because they don’t know what causes it, they have no idea where else in the dendro record it might also exist to one degree or another pre-instrumental record. No idea at all. Just whistling past the graveyard assuring themselves that shadow didn’t really just move.
Obama, Kerry, Pelosi and the EPAs Lisa Jackson are still fighting for climate change AGW. They will not admit defeat or even think this minor technical detail is but a bump in the road on their agenda. They think they have the votes. Let your elected officials in on the facts.
Epic Win for the Hockey Stick Hacker. American politicians should be quick to drop Michael Mann as a credible source of climate validity.
We here at CA are more than pleased to be able to help such nice persons in these matters.
Pretty big talk for a site with absolutely zero scientific accomplishments. Don’t you guys ever get tired of crying wolf to each other?
Okay, so the instrument record wasn’t grafted (in the precise meaning of the term) onto the proxy reconstruction. But if it looks like a duck, and quacks like a duck, don’t blame skeptics for calling it a duck. It hardly qualifies as being a specious claim, as asserted by MM. I will be less kind and call it cooking the books to achieve a desired result. And if the comments from critics are sounding harsh these days, that’s what happens when the data is hidden. It might not be pretty, but the true path of real scientific discovery is made up of many false leads and loads of controversy — it’s not a sing-along love fest of consensual mutual admiration.
Also found this little cracker:
Discussing a paper going public:
“We simply want to do our best to help make sure
that the right message is emphasized.”
That was M. Mann saying this.
These are very eye-opening; it shows that they are playing with much more than just science.
Might as well assume this as a continuation of the leak story. Looks like Revkin has virtually conceded as well… very interesting as they try to run for cover… LOL
Unless there is an extremely good explanation of how “hide the decline” should be interpreted as anything other than what it sounds like — trying to cook the data to make the recent temperature decline disappear from the chart — this is a damning evidence that should invalidate just about everything Hadley CRU and Mann had published.
The TGIF magazine mentioned is AKA “Investigate”.
IMHO this little magazine deserves kudos. New Zealand based, and always on the money.
Is it me, or have the BBC just taken all comments off Richard Black's site?
I guess I didn’t have it quite right. Instead of supplying interpolated values for the tree ring series, they use “instrumental data”. Guess I’d better just supply a link cause I was spreading the appended temps myth.
Ok, I'll bite: instrumental?
Maybe Roger Harrabin would like to tell us who these public servants are and why they are talking to the media. Maybe he would like to show us where they are being taken out of context.
The article is a pathetic apology for wrongdoing, and Mr Harrabin shows his lack of capability and bias to a cause which has the credibility of a dead gnat.
No mention on the BBC news, what a surprise. As you've said, the documentation of what we've done is all in the literature. I think if it hadn't been this issue, the CEI would have dreamt up something. …
cry havoc and let slip the dogs of war
geo wrote:
“If Jones had written “address the divergence problem” instead of “hide the decline” would we be talking about that email at all?”
Seems to me that Jones’ email was about making a graphical presentation look the way they wanted it to by employing the “trick” of inserting certain data in the right places. In that context, “hide the decline” would have been exactly what he meant.
“Addressing the divergence problem” would imply a great deal of work and research into exactly why many of the proxies they use do not track with the modern instrumental temp data. That has nothing to do with the above email.
Sorry, geo, no dice.
Remember the Downing Street Memo? “Intelligence and facts are being fixed around the policy…”
——————————–
Even if only half of the items you raise are punishable by criminal sanction (going to the slammer), it explains why the participants in the e-mails are refusing to comment:
“When the Guardian asked Prof Phil Jones at UEA, who features in the correspondence, to verify whether the emails were genuine, he refused to comment.”
Because they have talked to their lawyers and are preparing for criminal prosecution (just kidding…kind of)…
And that’s where they belong…in the slammer!
What's wrong with grafting data? They are dealing with dendro data, or data from trees. The most common method of propagating trees which maintains their good characteristics is grafting the branch onto some rubbish but stout base, like citrange for citrus. The guys at CRU must be real dendro scientists of the widest skills. They have just opened a new branch in horticulture: data grafting. As they expand into politics, grafting data could also take on another meaning.
It is interesting for MattN to quote, as Hunter said, “go go go”. If I remember correctly, it was a Hunter who used to head the IPCC secretariat at around the time of those emails.
IMO it’s worth mentioning how widespread the coverage and discussion of this hack has already become:
If you throw ”CRU email hack jones” at Google, as of right now it comes back with about 338,000 hits.
RC has completely changed tack. They are allowing all comments, which appear to really lambast them (RC) and all the team. I suppose they know they are on the line and it's the only thing Gavin can do to possibly regain some respectability in Science. However, I do admire his admissions re the emails and his willingness now to actually allow dissent, albeit under extreme duress. I doubt the site will survive, though.
Btw, since one (unproven) theory is that warming causes “the divergence problem”, have any of the dendros (or Steve McI, should the warmists “somehow” not find it a worthy exercise) taken the observed post-1960s divergence and applied it to the MWP in the reconstructions covering that period? Could, in fact, this be a reason why the warmists have consistently underestimated the MWP?
Shurley Knot (14:52:51) :
“Pretty big talk for a site with absolutely zero scientific accomplishments.”
That’s not a bug, it’s a feature, considering what “scientific accomplishment” amounts to in this field.
Turning a “trick” is another way to describe prostitution. Here, the science cops have been paid off in kind for looking the other way while the transactions continue.
I don’t think Gavin knows any lawyers. Rule #1 – DON’T SAY ANOTHER WORD. Rule #2, Don’t take the stand in defense of yourself or anyone else.
He is putting himself on the stand for a marathon session of defending this fiasco – he better have a lot of coffee cause this is gonna be an all-nighter! Even the believers are taking shots at the level of contemptible discourse in the emails.
“if Jones had written “address the divergence problem” instead of “hide the decline” would we be talking about that email at all?”
Probably not. If something else happened in the past, then things would be different in the future. If something had not smacked Earth and had instead missed, the moon wouldn’t be there at night and we wouldn’t have spent billions getting there. I fail to see the point of your question.
But the fact that he did use the phrase “hide the decline” shows how he thought about the issue and shows his intention. He wasn’t worried about addressing the “divergence problem”, he was interested in “hiding” something. What he wrote at the time says how he felt about it, how he had it framed in his mind. So what he did was rather than having a trend line that was a smoothing of what had already happened, he decided to have a little bit of future data included in the current “trend”. And since the temperature in the future rises after the end of the trend line, this backfilling of future data into the present made it show what they wanted it to show even though the line itself was then meaningless.
What is to prevent them from using that same “trick” with the global temperature data? Let's say I have a missing value for a station. I compute a “fill” value by looking at an average of nearby stations over time on the same day. What if “over time” extended into the future and used what the model says temperatures are “going to be” as a value used to compute that average? You then get into a situation where you have a self-fulfilling prophecy: your model's future prediction influences the present, which tends to bring the present temperature more in line with the model's prediction. You simply adjust today to better fit your prediction of the future. And the more values that go “missing”, the better you can create the future of your choice by gradually warming up the missing values, so the overall global temperature is influenced by the model's future prediction.
At this point I would put nothing past those people. They show little regard for scientific integrity of their output as long as it shows the “correct” result.
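The self-fulfilling feedback described in the comment above can be sketched as a toy calculation. All the numbers and the infill rule are invented purely for illustration; this is not anyone's actual code or method:

```python
# Toy sketch (all numbers invented) of the feedback the comment describes:
# if "missing" station values are infilled from a model's predicted trend,
# the observed average is dragged toward the model's prediction.

def infill_average(observed, model_trend):
    """Average a list of station readings, replacing None (missing)
    with the model's predicted value for that station."""
    filled = [obs if obs is not None else pred
              for obs, pred in zip(observed, model_trend)]
    return sum(filled) / len(filled)

# Five hypothetical stations; the model predicts 1.0 C of warming everywhere.
model_trend = [1.0] * 5

complete = [0.2, 0.1, 0.3, 0.2, 0.2]     # all stations report
sparse   = [0.2, None, 0.3, None, None]  # three go "missing"

print(round(infill_average(complete, model_trend), 3))  # 0.2 (pure observation)
print(round(infill_average(sparse, model_trend), 3))    # 0.7 (pulled toward model)
```

The more stations go missing, the closer the "observed" average sits to the model's forecast rather than to the actual readings, which is exactly the circularity the comment warns about.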
Thank you for explaining the “Trick”.
I am just sitting shaking my head and saying to myself: “They are so f***ed!”
Sorry for the language but it is hard to articulate my emotion in any other way.
Ben Santer Apparently!
“trick … to hide the decline”
that is the point. The trick was to hide. This is unacceptable. It's dishonest. It's fraud. There is no place in any profession, scientific or otherwise, for deliberate use of “tricks” to “hide” information which the hider finds undesirable.
Can they be prosecuted for misuse of govt funds / fraud? I hope so.
@ LittyKitter:
Apparently, there are legal issues…and until “airtight” controls are in place, no comments.
If I understood it well, smoothing gave a down-tail to proxies ending in the '80s, going even lower than the raw data showed, so they prolonged the proxy data with temperature data and cut the now better-smoothed line at 1980. Which is not THAT bad; smoothing usually yields weird ends and beginnings for the smoothed line.
More serious was when the Team cut the proxies in the 1980s, since they did not show as fast an uptick as the instrumental record did.
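The endpoint behaviour described above can be sketched with a toy centered moving average (synthetic numbers, not CRU's actual smoother): near the end of a series the window truncates, so the smoothed line sags below a rising trend, while padding the end with extra data props the tail up.

```python
def smooth(series, window=3):
    """Centered moving average; near the ends the window is truncated,
    so the smoothed line can bend away from the underlying trend."""
    half = window // 2
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - half): i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

rising = [0.0, 0.1, 0.2, 0.3, 0.4]   # steadily rising synthetic series

plain  = smooth(rising)               # last point averages only two values
padded = smooth(rising + [0.5])[:-1]  # pad the end, then drop the pad point

print(round(plain[-1], 3))   # 0.35 -- truncated window sags below the raw 0.4
print(round(padded[-1], 3))  # 0.4  -- padding keeps the trend at the end
```

This is only the generic end-effect of any centered filter; the controversy is about what data was used as the padding, not about smoothing per se.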
1107454306
Not that anyone has anything to hide…
Good lord, I’ve run out of popcorn.
Guys, remember Gavin at Fenton Communications, I mean Environmental Media Services, I mean RealClimate.org, says that reposting private emails is unethical, so make sure to keep sending these to as many people as you know.
450 Peer-Reviewed Papers Supporting Skepticism of “Man-Made” Global Warming That they Tried to Keep out of the IPCC Report.
1089318616.txt
Don’t let anyone read this either,
The Truth about RealClimate.org
good stuff!
Here is some of Rahmstorf:
From: Stefan Rahmstorf To: Eystein Jansen Subject: [Wg1-ar4-ch06] Ch6-Climate Sensitivity Date: Fri, 01 Oct 2004 11:49:05 +0200 Reply-to: stefan@xxxxxxxxx.xxx Cc: wg1-ar4-ch06@xxxxxxxxx.xxx,
I believe that the people involved in doctoring the data were chosen to push the enviro-socialistic beliefs of the Euro-centric (both in the US and Europe) progressives. The Excel file with all the grant $$$ helped push that along. It seems that this country and Europe are going further into a “global new deal” where health care, carbon taxes, redistribution of wealth and Big Government rule. Wall Street/London did push Obama 3 to 1 for a reason, because they will get the bailouts and government projects, brought to you by John Q. Taxpayer. It's too bad the scientific and political sides of the AGW debate have been melded together. Regardless of whether AGW is occurring or not, the politicians and money powers of the world have simply co-opted the Science over Politics. This story is nothing more than a symptom of Supra-Internationalism seeking to destroy the National and Economic Sovereignty of Nations.
Our little Gav is getting worried?
End of supercilious special pleading.
He never suggested malfeasance; he just showed that you were all rubbish at statistics. But he never said that.
You censor so I post somewhere else.
You could not punch your way out of a wet echo.
Trick or Cheat?
So AGW is man made. We knew this, didn’t we? Now the rest of the world knows which men made it.
I say, AGW is dead!
The Warmers at giss and at cru
Cooked up data to make hockey stew
The trick was to hide
How the warming had died
Hero hacker put the lies in full view…
Should I stay or should I go.
Our lovely boy Gav is still up; he has even allowed criticism. Maybe he just wants to show what a regular guy he is?
Where there is smoke, there is fire. Gore & Hansen are tied to CRU by the IPCC; their proclamations are likewise inflammable.
How’s that hot seat working out for you, Al?
Now this is interesting from Tim Osborn (1214229243.txt) :
So apparently if you organize a campaign to get FOI requests submitted, they can deny them all claiming they are designed to “inconvenience” or somehow harass them regardless of the merit.
I actually got a post through over at Real Climate. I was responding to a previous poster's comment:
I have to give gavin some credit. He is allowing posts from previously banned posters and trying his darnedest to reply to them (albeit not very well!)
AGW for Rahmstorf = Liar, like Al Gore!
I know the “hide the decline” email is getting lots of notice (and it is noteworthy), but I am even more interested in the “delete the emails” instructions given. This looks to be evidence of conspiracy to obstruct. Where are the lawyers among us?
Michael Mann via UK Guardian:
Haha, if Anthony had a quote of the week, this would be it.
From: Stefan Rahmstorf To: Keith Briffa Subject: comments on Briffa, last millennium Date: Thu, 13 Jan 2005 19:15:25 +0100 Cc: Jonathan Overpeck , Eystein Jansen
Dear Keith,
you've done a great job on the touchy subject of the last millennium, which is central to our whole chapter. My comments to that are threefold: (1) If you could shorten the text somewhat, it could become more powerful (2) Some small edits & comments are in the attached doc (3) I propose some improvements to the figures as follows.
– Fig 1a the land temps seem to go off plot, temperature scale needs to be extended
– we need a break between panels a and the rest, since it's a different time scale on the x axis
– Fig 1c also has one curve going off the top
– Panels 1b-d might run the time axis up to 2010 or so, else the important rise at the end is hidden in the tick-marks and less obvious than it should be
– the legends need to say what the baseline period (zero line of y-axis) is (hard to find this in the axis label)
– this baseline should be the same for all curves, i.e. 1961-1990. Fig 2d says 1901-1960 – it's not ideal to have a different one, as compared to Fig 1. Also, is it true? Surely the Storch curve is not shown relative to this baseline, it's way above it. Aligning it like this could lead to the dangerous misunderstanding that Storch suggests a much warmer medieval time compared to everyone else, which of course is not the case.
I hope this helps.
Cheers, Stefan
BBC Removes all comments to Black’s blog:
“Update 2309: Because comments were posted quoting excerpts apparently from the hacked Climate Research Unit e-mails, and because there are potential legal issues connected with publishing this material, we have temporarily removed all comments until we can ensure that watertight oversight is in place.”
Time for those of us in the UK to seek an FOI request for that e-mail?
From: Stefan Rahmstorf To: David Rind Subject: Re: 6.5.8 revisions Date: Fri, 14 Jan 2005 12:20:47 +0100 Cc: Tim Osborn , Jonathan Overpeck , Keith Briffa , Eystein Jansen , FortunatJoos
Hi David,
thanks for the detailed response. I’ll try to be brief.
On the orbital forcing you write:
The point here is that climate can be forced by other factors than simply a global,
annual average radiation change, which is the metric now being used.
I think we all agree on this point. My concern is only about how to present it in the
section. I think that giving a climate sensitivity wrt. global mean orbital forcing is
confusing to the uninitiated, e.g. your statement in the section:
This high climate sensitivity (2°C/Wm^-2) is occurring in an atmospheric model
(ECHAM-1) whose sensitivity to doubled CO[2] is about 0.6°C/Wm^-2.
I really think we should not give a number like 2°C/Wm^-2 as “climate sensitivity” to
global-mean orbital forcing and contrast it to that to doubled CO2. It gives out the
message to people that climate sensitivity is all over the place and ill defined. That’s
not the case. Climate sensitivity is a well-defined concept for a globally uniform forcing
like CO2 forcing, but nobody expects any clear relation between the global mean part of
orbital forcing and the climate response.
Jones: You want answers?
McIntyre: I think I'm entitled.
Jones: You want answers?!
McIntyre: I want the truth!
Jones: You can't handle the truth!
McIntyre: Did you order the code changed?
Jones: I did the job I had to do.
McIntyre: Did you order the code changed?!
Jones: You're God damn right I did!
Still a few good men. Thankfully.
The leak will only help if it is widely reported. If the MSM refuses to report the leak, the general uninformed population will never hear about it and will still follow Gore.
@Ron Cram
Ron, a good observation: “hide the decline” allows debate.
The deletion of emails is another aspect which, of course, stinks of conspiracy.
But let's deal with morality. If someone has died, no normal person would take a perverse pleasure in their death, would they?
But these people did, and this suggests they have no moral base.
I saw mention of the leak on the Fox News website and quickly looked on WUWT, but people who refuse to look at Fox may never hear about the leak. We need to tell one and all.
Paul Hudson's BBC blog is still open and there've been some relevant posts on it.
Where are the warmers hiding? The TWO Community Climate forum is a hotbed for them at the moment I think.
BBC is wading in now with loads more warmist cr*p. Looks like they’re calling everyone into the office!
Jeremy (14:21:51) “Dr. Jones et Al. it is time to do the honorable thing when caught red handed as a cheats and a liars. PLEASE RESIGN before you are all FIRED.”
A tenured professor getting fired? Perhaps you are overlooking what allows them to lie.
The knives are out for the BBC politically, especially in other, non-public service media outlets. I’m not surprised they’re being ultra-cautious on this. I wouldn’t read anything into it other than that.
Over a thousand comments on a single publication within 14 hours!
WUWT is breaking records!
LittyKitter (14:59:43) :
Is it me or have the BBC just taken all comments off Richard Blacks site?
Richard Black Comments:-
Update 2309: Because comments were posted quoting excerpts apparently from the hacked Climate Research Unit e-mails, and because there are potential legal issues connected with publishing this material, we have temporarily removed all comments until we can ensure that watertight oversight is in place.
It's amazing how the crazies over at the reddit.com enviro forums are not even batting an eye at this. Are they THAT closed? How can this not even get them to question it? Blatant statements are being ignored or downplayed.
I too was surprised at the comments being allowed on RC.
I was even more impressed at how manfully (no pun intended) Gavin was defending his POV.
Love him or loathe him, he’s a born fighter.
I hope that he doesn’t end up single handedly manning the barricades!
Posted this to RC. Wonder if it gets through and what the answer might be?
Gavin, is there ANYTHING in the emails that indicates untoward behaviour or other unscientific behavior on the part of email authors or recipients…or is it all just one big misunderstanding on the part of those who are criticising? Is there anything?
Great work, but until a national newspaper or TV and radio station covers it, we are still unable to get the message home.
At least in America the Wall Street Journal has done a good piece on Steve McIntyre…. and 200 people demonstrated outside of a recent Al Gore lecture.
Cracks showing perhaps but it has got to become mainstream news.
Related to my earlier post: for example, no mention of the leak on the home pages of CNN.com, MSNBC.com, or ABCNews. Again, we need to send out emails to everyone we know.
Fame at last.
Tree ring controversy….
Richard McGough (14:32:08) provided this excellent searchable link.
I typed in Watts and came up with the above.
This is the CC equivalent of the fall of the Berlin Wall; can't think of anything else to talk about at the moment. I am driving my wife crazy!!!
I think it's pretty obvious by now that CA is under a DoS attack, which tells me that somebody somewhere is utterly terrified of what McIntyre is going to do with all this material… or even certain parts of it.
I once knew someone who had worked at the Mack Truck Testing Lab. He had collected funny and stupid things that had been put in reports. My favorite, and the one that is apropos to this controversy, is: “Through statistical manipulation, we arrived at the desired result.”
Jeremy (14:21:51), I believe in the UK “doing the honorable thing” means shooting oneself. In Japan it would be seppuku.
Breaking News! Anthony Watts found to have surreptitiously softened criticism of bad reporting by sympathetic news source!
Critique 1: Daily Tech is unresponsive and slow:
“Note from Anthony: When the DailyTech first posted this story and referenced my blog as the source of the compilation, without ever interviewing me or asking me a single question, I told them immediately they had it wrong. Shortly after that I published this ”Update and Caveat” (below) on the original post since they were slow to react. All told it took over 8 hours for Dailytech to make a change to the wording, but by then the genie was out of the bottle.”
Shortly after posting the note about Daily Tech’s reporting, Mr. Watts apparently decided that his criticism was too harsh.
Critique 2: Daily Tech is gracious and cooperative.
Malfeasance! LOL!
BBC have been awful over the last two days – we had a Red “Take Action” severe weather warning for north-west England and south-west England for heavy rain. When the inevitable happened and we had a fatality (a heroic policeman diverting traffic off a busy bridge, which then collapsed with him on it), they had less than 5 minutes on it in their morning broadcast compared to Sky News which devoted practically all their coverage to the dreadful flooding.
Of course they are now talking about it as a 1-in-1000-year event, with all their usual connotations and wry looks to the camera.
Insurgent, good, but how about this one:
1107454306
Ric Werme (14:35:30):
Reminds me of the other usage of “hide”
“Tan me hide when I’m dead Fred
Tan me hide when I’m dead
So we tanned his hide when he died Clyde
And that’s it hanging on the shed.”
Apologies to Rolf Harris
We've been conned. The tricksters, liars and frauds should be held accountable and suffer the consequences for their actions.
“no mention of the leak on the home page of CNN.com, MSNBC.com, ABCnews”
Last time I looked at the ratings, Fox draws more audience in most time slots than all the others combined. So it doesn't really matter if the other nets aren't carrying it. Nobody is listening to them preach anyway, except the choir.
Very very difficult to get any messages through to our Gav. Hope you are reading lovely boy.
The whole stinking lie is now shown to be a smell. I always thought it was peculiar that scientists would use the word denier, but now I understand that none of you are scientists and your attacks were to hide the fraud.
Not only do you promote a falsehood, you conspired together to do so.
Come on, Gav, how many R's are there in resignation?
geo (14:21:09) asks “If Jones had written ‘address the divergence problem’ instead of ‘hide the decline’ would we be talking about that email at all?”
Yes we would! We would be asking why the tree ring data diverged from the instrument data. We would ask where the instruments were located in relation to the trees. We would question whether the reconstruction was valid through its entirety. We would talk about this e-mail!
geo (15:15:37) :
Um, no, you misunderstand the divergence problem. I shall explain…
The tree-rings are being used, after a sort of weighted average, to determine temperature. The divergence problem is simply the fact that post 1960 or so, many of the tree-rings no longer correlate well to temperature. This implies that tree-rings do not actually respond well to temperature, or at least, temperature is not adequately reflected in their measurement. This cannot be “applied to the MWP” as it signifies the two things (temperature and growth) are uncorrelated (or weakly correlated at best, with a non-linear relationship).
What makes this devastating is that they cannot use reconstructions based on tree-rings to “make the MWP go away.” True, the likes of Mann and others continue to use tree-rings in their reconstructions unabashed, but no amount of screaming can overcome the fact that such reconstructions are worthless.
Mark
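Mark's point, that post-1960 the rings simply stop correlating with temperature, can be illustrated with made-up numbers (purely synthetic; real proxy data are far noisier): a series that tracks temperature until a cutoff and then decouples shows a high early correlation and a weak or negative late one.

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Synthetic "temperature" keeps rising; the "ring widths" track it
# early on, then decline after the cutoff (the divergence).
temps = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
rings = [0.0, 0.1, 0.2, 0.3, 0.3, 0.2, 0.1, 0.0]
cut = 4  # index standing in for "1960"

early = pearson(temps[:cut], rings[:cut])
late  = pearson(temps[cut:], rings[cut:])

print(round(early, 2))  # 1.0  -- rings track temperature before the cutoff
print(round(late, 2))   # -1.0 -- anti-correlated after the cutoff
```

The calibration problem follows directly: if the relationship can break in the observed period, nothing in a correlation fitted over the calibration window guarantees it held in the unobserved past.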
Trust me Doug, the sheeple, as we are sometimes rather cutely described, are quite capable of seeking out the information they need in order to inform their opinions. Over the last year or so, I’ve seen the comment sections of MSN pro-AGW arguments full to overflowing with sceptic opinions. This isn’t some coordinated attack; it’s ordinary people like you and I proactively informing themselves, raising eyebrows, expressing their views. The days of media mind control over the general population are long gone.
There have been mentions on instapundit, fox news, and fox business.
also a mention on hotair
doug (16:40:04) says “Related to my earlier post, for example no mention of the leak on the home page of CNN.com, MSNBC.com, ABCnews . Again we need to send out emails to everyone we know.”
I saw it on FoxNews.com. I e-mailed my Senators and Congressman because they're going to see some nutty climate bill and need to be prepared to fight me off when I storm Washington.
This is all excellent work.
Here’s what I see as especially ugly and of very low ethical standard:
The emails show very clearly how the peer-review process was manipulated into a closed shop for insiders only; and having this neatly organized, they then arrogantly tell the world that the work of Steve McIntyre and others has no value because it is not “peer-reviewed”. What a misuse.
What kind of personality must one have to arrange such a thing, then proceed and “refine” it over years?
And all this fraud is done with taxpayers' money? And those people influence politicians and the media, and scare the rest of the world?
Isn't it about time that a lot of members in these circles draw a line and get back on track before it's too late? Come on guys, we've got you.
Amazing that Jones denies misleading when he clearly uses the word “trick”.
Reminds me of MBH98, but this is a REAL hockey stick!
What is the correct solution to this scandal? If the institutes of higher learning are honest, they should fire all those who are involved: Jones, Mann et al. Wanna bet they keep their jobs? This makes these institutes co-conspirators.
From: “Graham F Haughton” To: “Phil Jones” Subject: RE: Dr Sonja BOEHMER-CHRISTIANSEN Date: Wed, 28 Oct 2009 17:32:24 -0000
Content-class: urn:content-classes:message Content-Type: text/plain; charset=”iso-8859-1″
…, ‘and when Sonja finds out, how will you explain it to her…!’
Graham
Nothing like remaining objective…
Not sure, but this smells like cherry pie cooking:
I posted the following at RC as they don’t seem to be censoring everything just at the moment (it’s a credibility thing . . . )
I think the defence.”
Re getting this to the mass media, I posted the following on the enormous ‘Breaking News’ thread last night. Since it doubtless got buried, I’m taking the liberty of reposting:
/Mr Lynn
Here it is at the New York Times:
The “Nature trick” is nowhere near as damning as some of the things about a deliberate attempt to hinder FOI requests. I am sorry, but if your data and method cannot be reproduced, it is not science. Therefore anything that comes from that supposed data and method is meaningless garbage. It is hard to believe those at the Hadley Centre and elsewhere who have blocked these attempts could pass a high school science class with so poor an understanding of scientific method.
Meanwhile the Guardian has published a ridiculous and embarrassing article.
It is obviously nothing but a spin control piece calling for the punishment of the leakers, reminding us of the supposed “evidence for global warming” and deliberately avoiding or oblivious to the entire actual issues. Such as the fact that the evidence they point to is shown by these very documents to have been largely fake.
I still can’t believe these emails can be real.
1123622471.txt
“The use of “likely” , “very likely” and my additional fudge word “unusual” are all carefully chosen where used.” – Keith Briffa
Luboš Motl (14:12:30) :
Dr Phil Jones’ 13.7 million British pounds in grants, seen in one of the XLS files,
……and preachers get accused of fleecing the flock.
Maybe CRU can submit a Freedom of Information Act request on the Farmers Almanac so they can compare notes. Both seem to be “reasonably” accurate; however, the Farmers Almanac may not be considered “peer-reviewed”. :)
Posted 3 February 2009:
Allan M R MacRae (21:07:14) :
NOW THE WORLDWIDE PRESS IS SWARMING,
‘ROUND ANOTHER
FINE EXAMPLE OF MANN-MADE GLOBAL WARMING!
JUST LIKE THE FAMOUS HOCKEY STICK,
THEY USED THE OLD
“SPLICE TOGETHER TWO DATASETS” TRICK.
***********************
Today's comment:
>>So AGW is man made. We knew this, didn’t we? Now the rest of the world knows which men made it.
Ha.
Phil & Michael are on the phone right now with officials from ACORN getting advice.
Darn it, I’m getting withdrawal symptoms. The last time I got on to Climate Audit was ages ago.
Granted that when I do, I understand 10% of the Science but absorb 100% of the integrity. Thanks Anthony for giving me a partial fix tonight but, if it pleases the Big One, bring back CA to me!
Loved reading about SMc from the perspective of Team members, BTW; their fear of his intelligence is palpable from their evident hatred and dread that he will turn his gaze upon them.
In the 21st century the establishment antipathy towards the gifted outsider is every bit as strong as it ever has been.
Unbelievable, this is tricky stuff.
“I’ve just completed Mike’s Nature trick of adding in the real temps
to each series for the last 20 years (ie from 1981 onwards) amd from
1961 for Keith’s to hide the decline.”
Perhaps this is just a remark stating that he was trying to test if Mike’s Nature trick did, in fact, hide the decline.
Did they find out if the trick did the job?
Did they then produce results knowing the trick was involved?
Those are the real questions.
Don’t these expressions of outrage ring a bit false? The tenor of comments would indicate shock and awe. All the revealed correspondence really does is confirm what most deniers have been accusing those folks of for years and years, either directly or by implication. What is there to be shocked about?
It seems to me a more honest reaction would be, “Yep. Just as we’ve been saying” followed closely by, “Damn! How could they be so dumb as to leave such an electronic trail of their activities?”
CH
Just posted this on Tips and mentioned recently on the “Hacked” thread, is down. Anyone know why, or has recently visited, to get an idea of when it went down?
OT: Only in the alarmist Globe and Mail in Canada can Martin Mittelstaedt, an activist/reporter, post this while the rest of the media are ablaze with emailgate!
“Martin Mittelstaedt Environment Reporter
From Saturday’s Globe and Mail
Published on Friday, Nov. 20, 2009 7:38PM EST
Last updated on Friday, Nov. 20, 2009 7:41PM EST
On the eve of major UN climate change talks next month in Copenhagen, a major survey of Canadians has found that more than three quarters of the public feel embarrassed that the country hasn’t been taking a leadership role on reducing greenhouse-gas emissions.”
And where do you think this guilt spewing poll comes from?
Hoggan, owner of the racist, delationist blog desmogblog that attacks any scientist whose work does not condone AGW… And already this week the same reporter kindly obliged with a plug for the Hoggan book…
A friend of mine recalled it! The poll was done in April by telephone. They started asking some unrelated questions and then little by little asked questions about AGW, BC Hydro, etc… Tricksters!
Re: RC’s “change of tack”. There is no change of tack. The release of emails gives them a good opportunity to pump the many “skeptic” red herrings that can be used to caricature the opposition. Most threads allow stupid or angry opposition posts for much the same reason. They also allow easy questions that they can easily answer such as in the current thread. They also allow their own partisans to post lies about how censorship-free their website is.
However they do not allow probing questions about the data and how it has been manipulated. That is the case with the current thread. Future threads will undoubtedly have the same or greater censorship as those processes are reverse engineered and pointed out using the zip file data. At that point it will be old news and they will have moved on.
Calling: Robert M. (14:15:43) :
I believe Tax Evasion might be added to your list!
I just spent the last hour over at Real Climate: the longest I’ve ever been able to bear it (thanks to allowing real diversity of comments for once). One thought that occurred to me while reading Gavin’s justification for keeping McIntyre from getting published is that if climate science had any credibility, they would allow even what they consider to be “poor” papers to be published. In every field a wide range of papers is published, or should be: let the other scientific readers decide if the science stands on its own merit. The Hockey Team, which seems to have controlled the publication process, seems to have really feared the light being shone on their methods by McIntyre. Their credibility is in tatters.
As an academic, I have graded my share of undergraduate papers. While still a grad student myself, I found myself perplexed by student papers which used complex language or arcane constructions which I could not follow.
Allan M R MacRae (17:41:38) : Not jail, Allan – sentenced to collect real climate measurements for the rest of their lives at solo manned weather stations in Siberia (my wife’s suggestion – far too lenient, I think).
Stacey (15:05:18) :
…and I posted yesterday that they would claim the e-mails were taken out of context as a defense. They took tree rings out of context also. Jazzed-up proxies.
Michael Mann claims to be into math. We all know taking two points and creating a vector is not linear regression analysis. There is no hockey stick.
You can search and read the hacked email correspondence online over here:
Thanks to the ones who put this up.
“Atomic Hairdryer (15:47:11) :
So AGW is man made. We knew this, didn’t we? Now the rest of the world knows which men made it.”
Now, that makes my hair curl…
I said this on CA, so I’m saying it here. The person who got this stuff got a lot more, 60Mb is just a couple of minutes download – heck a thumb drive is Gigabytes. He had to have root rights to get to email. If the person who did this is reading this – I salute you! – I hope you covered your trail, because I’m sure the search is on.
Note to James Hansen, secure your computers.
Do not relent. These frauds must be eviscerated. Show no mercy. Otherwise, they will rise like Mike Meyers in the Halloween series of horror flicks.
Continue to hammer away at them until their fraud has no voice.
We expect truth from our scientists. Anything but truth makes them political tools with cynical agendas.
Please let the adults take back real science (and government).
[To the moderators…great job with the volume today!]
Allan M R …..
No, they belong on a road crew in the American Southwest, during the summer, with picks and shovels making small rocks out of large ones. No shade, and only the water jug (warm) and a portapotty to prevent some lawsuit.
Mike
Comment from Dot.Earth
“If this crime actually has an effect on funding, for example, then I would encourage my fellow scientists to just abandon civil society to its own devices. If society can’t appreciate the vital role of science in addressing societal problems, if this is the kind of behavior scientists have to contend with, then I’d say it’s time to say to hell with society and let it suffer the consequences of ignorance.”
Don’t leave us!!! We need more fraudulent science of teh doomsday!
“Mann” Made and “Philtered” Climate Change…
Red meat
The masking is a ‘fix’ applied to the model simulations to adjust them to fit the surface data known to contain spurious trends. This is simple GIGO.
This searchable site is great.
searched – “deleted files”
Just plug in words of your choice (this one is “deleted files”) and all the CRU emails with them come up.
Excuse me, that was from the CRU e-mail
“The masking is a ‘fix’ applied to the model simulations to adjust them to fit the surface data known to contain spurious trends. This is simple GIGO”
Schadenfreude => ha ha ha ha!
Here’s a good one, about changing”
So now we know what the expression “the old hide the baloney trick” actually means
Thanks RC, Ive often wondered about this myself
the story is now published at Washington Post and NYT
Yeah!!!!!!
Let loose the flood……
The NYTimes even has a link to the hacked emails at
(After 10 yrs of no warming), have the mainstream media become more willing to report the skeptic arguments?
Richard deSousa (17:16:04) :
“What is the correct solution to this scandal? If the institutes of higher learning are honest, they should fire all those who are involved. Jones, Mann etal. Wanna bet they keep their jobs? This makes these institutes co-conspirators.”
Yeah, that’s the real problem–the corruption of institutions. As one Washington journalist famously said, “the real scandal in Washington isn’t what’s done that’s illegal, it’s what’s done that’s legal.”
Gee, these emails are a great how-to:
How to deal with Steve McIntyre
How to deal with Japan
How to deal with risks that are too low
How to deal with the IPCC
How to work with WWF
Of course, we’re entering the weekend news cycle…the only news to survive through Monday is likely to be the US Senate showdown on Reid Healthcare Bill. I hope I’m wrong and that this CRU story grows over the weekend.
Found this in 1252154659.txt
It is part of an e-mail advising trying to reach consensus on answering Steve McIntyre’s challenges. I would say the author’s tone is concerned and borders on nervous/scared.
Cc: ”
…
“(4) We selected records that showed 20th century warming. …”
Suspect much more unflattering comments exist in all these e-mails.
Jaypan,
Have any of the Japanese news agencies picked up on this massive fraud? The Japanese, in my view, could not fall for this climate nonsense… Or have they?
Mike Bryant
It’s a little more nuanced than that. The mainstream have started responding to the overwhelmingly sceptical comments they receive on their pro-agw articles. The scepticism has been growing for a year or more and reflects I think, public doubts about the science, but more so about the proposed policy responses.
Just wondering how many Soros dollars were circulated to the Journals and other organizations that danced to the music of these corrupt so-called climate scientists…
Mike
This one is good; it’s all about the right message. Michael Mann said “we don’t expect to in any way be critical of the paper.”
Peer Review at its finest.
vigilantfish:
Very prescient observations…and I agree about listening to Gavin. The word sophistry comes to mind….
Chris
Norfolk, VA, USA
“Tom in Florida (17:59:02) :
“I’ve just completed Mike’s Nature trick of adding in the real temps
to each series for the last 20 years (ie from 1981 onwards) amd from
1961 for Keith’s to hide the decline.”
Tom, if the dendro data is filtered at 50 years the instrument data _must_ also be filtered at 50 years. Have fun. Did they?
Bill Illis (18:21:49) :
Ditto the thanks to Watts, mods, and McIntyre.
This is history watching long-held “institutional” lies unraveling before our very eyes.
Chris
Norfolk, VA, USA
One purpose of the release of these files (if they were released with the knowledge of those whose material the files contain) could be a ‘clear the air’ exercise before Copenhagen.
Similarly, one of the reasons for the ‘fall of the Berlin Wall’ over at RC is that they know the release of these files will be steering a lot of new media attention their way and they want to give the ‘impression’ of openness and reasonability to the world’s eyes now and through Copenhagen.
An old Soviet trick.
. . . I should have said in my previous post that I had attempted to post the Alice in Wonderland analogy over at RC but apparently Gavin’s tolerance for being compared to Humpty Dumpty is low. Oh well.
Michael Mann: “We simply want to do our best to help make sure that the right message is emphasized.”
The “right message”?? Huh??
What is he…a religious evangelist or a scientist??
I don’t care in WHAT context this was delivered, this is about as unscientific an innuendo as can possibly be made.
Chris
Norfolk, VA, USA
I have to wonder if the person who made the hacked data available learned something from Andrew Breitbart’s release of the ACORN videos. Put some out, wait for denials, then nail them with more proof.
Well in any case I can hope. :-)
I have been so disturbed by argument from “the consensus of scientists” that I can’t help relishing this.
I have not had the time to read all the comments and I have to head to reserve duty early in the morning, so….
There is a way that this can get ignored and forgotten. The media and the government will focus on the theft of the data and ignore the content of the data. This happened several years ago when a Republican Senate staffer stumbled across memos and messages between Democrats in the US Senate and lobbyists for NGOs.
The content of these messages and memos was shocking because the NGOs were telling the Senators to delay or block the confirmation of certain appeals court judges for a few years so that lawsuits being brought by the NGOs would be more likely to be heard by judges appointed by Carter or Clinton. This was a criminal conspiracy…but it was ignored. Instead, there was an investigation about how the data was released. Eventually a Republican staffer was fired.
This only works if you are a Democrat. When some political activists intercepted cell phone conversations between Republican lawmakers and political strategists (and recorded them), the embarrassing information was spread everywhere. A Dem congressman even read it on the floor of the House of Representatives. I don’t think any criminal charges were pressed against the couple that intercepted and recorded the conversations. The congressman was investigated by the Ethics Committee and given a mild sanction.
So, there will be a big investigation about how the data was stolen and the content of the data will be forgotten. If the media plays its usual role, this approach may even work.
Why is CA still down? Is it under a DDoS attack? Something is fishy here. This is too much downtime for heavy traffic to be the cause.
I’ll give you another explanation for the sudden “largess” over at Real Climate in allowing more open comment: Gavin is getting ready to throw Phil Jones and Mad Mann under the bus. Going through the emails, there wasn’t much (that I saw) that implicated Gavin in the way of shenanigans – a little bit about his censoring at RC deviating from its original apparent intent, but little to actually point a finger at Gavin and yell “you too!”
Gavin has to deflect the criticism, quickly, that he censors debate, especially from “contrarians”, because it’s a massive theme in the email train, and for the moment it is his only significant exposure (running the blog on company time is an “internal” matter with his employer).
But by allowing the full wave of anger to vent from the skeptic side, especially as it pertains to the Team, he reinforces the distance between himself and Jones et al. He’s quite prepared to let them swing.
[quote]This one is good its all about the right message Michael Mann said “we don’t expect to in any way be critical of the paper.”
Peer Review at its finest.
[/quote]
Obviously you have as much knowledge of the peer review process as a monkey does of manufacturing computer hardware.
If they haven’t been asked to review the paper, and it has passed review and has been accepted for publication, their request is hardly out of line. This is especially true when the whack-jobs [snip] continue to disregard the science behind this. The climate is changing. The causes are unknown, but only an idiot would think that increasing global temperatures during the longest solar minimum in a century are somehow *not* indicative of change.
Apparently, over at CRU, they have Greenpeace writing letters for them (872202064.txt):
From: “Wallace, Helen” To: “‘t.mcmichael@xxxxxxxxx.xxx'” , “‘m.hulme@xxxxxxxxx.xxx'” Subject: Letter Date: Thu, 21 Aug 1997 18:21:04 +0100?
Thank you both very much for your time and trouble.
Best regards, Helen
Dr Helen Wallace Senior Scientist Greenpeace UK
Full email at:
I have nothing to add, I just want to be at the party.
Oh, I have to add something? OK, this is my bit.
Forget “trick”, that’s ambiguous. Concentrate on “hide”. No legitimate academic exercise hides anything.
I’m still waiting on someone to talk about the ostensible subject of this post – “stagnating temperatures”. What the heck is that supposed to mean? Every 5 year period in the last 12 years has a positive rate of rise for global mean land temperature. That’s not a coincidence.
Tricky Dick must be rolling over in his grave!
To restate the importance of Jones’ trick:
The problem is not in calling a cool technique a ‘trick’.
The problem is in using a cool trick to cover up the truth.
Jones used his tech to hide the truth, and to mislead policy makers.
Gavin may be, to his credit, demonstrating that he, unlike his colleagues, has integrity.
Hugh – it’s our job to make sure that doesn’t happen.
Erik,
Who said anything about coincidence?
Even more, why is a 5 year period so important now to you?
Is there some astronomical import to a 5 year parcel of time?
And please do let the folks at Hadley know – they seem very concerned about the flat temps.
I bet people like Curt, in this Mann scolding, don’t like the treatment they got.
M Mann to Curt Covey
HEADS UP REMINDER
That link takes you to a searchable site where you can search the e-mails with any name or phrase.
Hugh,
You pegged it.
The NYT seems to be taking exactly this tactic.
Here they have the ability to do something they have not done in years – actually report the news- and they won’t because it is stolen. Not that the stolen CIA secrets they published about lawful programs, that directly put Americans in harms way gave them any problems.
The good news is that cap-n-tax seems to be dead, which means Copenhagen will only be a Christmas shopping trip.
Michael Mann explaining that scientists often use the term “trick” to refer to “a good way to deal with a problem” reminds me of a former president of the University of Colorado explaining that at one time “cunt” was used as a term of endearment.
It would seem that the “warmmongers” who are posting here are the sort of people who, when smacked alongside the head with a wet mackerel, would complain that it wasn’t a trout. Dead fish smell alike, and there were significant crimes perpetrated at the CRU, and the criminals haven’t stopped since.
Kudos to CHRIS S for including what is probably one of the greatest dialogues in moviedom.
A question – did the temps NOT rise in 1998? Why did the green worm not follow the temp curve? I’m afraid that I don’t understand what the problem is.
I teach a 400-level undergraduate software engineering course in which case-studies of software bugs that cause havoc are used to illustrate various strategies for better software engineering practice. Nowhere in the textbook is there mention of software development for scientific modeling. I am now developing a lecture on that topic using this current information on tricky programming techniques coming out of CRU.
As an added bonus, I will also be integrating a classroom discussion on the ethics of hacking vs. tricking – do you think it is justified or not in this case? – etc., etc. I can’t wait to hear the students’ responses.
I uploaded the emails from Hadley CRU to my site. Seems that the others were overloaded.
Here’s the link:
Incidentally, there are also documents.
questioner (20:28:03) :
Rather than resorting to personal attack you maybe like to enlighten me to why exactly Mann shouldn’t be critical of the paper in case he finds something he perhaps disagrees with.
“…Can it be that all this added data, information and knowledge has produced no more wisdom than the ancients had? Or have we fallen into the same trap they did? We think we know, or believe we know, what in fact we do not. Is it that we have failed to see the difference between Mythos and Logos, between fact and fantasy?” (from one of my unpublished essays)
I can not say I am surprised but I am more than sad.
@TerryBixler:
I’m guessing you didn’t see the recent video of Senator Inhofe telling Senator Boxer:
“We won, it’s over, get a life!” (That was BEFORE these files were leaked!)
Sweet! On to Victory!
I’m very pleased to be able to report that the Cleveland local Enviro/Alarmist newspaper writer has just written a piece about his continually getting drowned in Skeptic (and some separate abusive) comments to his articles.
Unfortunately, the Alarmists are entrenched here, too. The piece also concerns a local/national noted environmentalist who has it out for “Deniers”.
This article is a reminder of the importance this CRU Hack can play in addressing the many well established alarmist followers throughout the world, and in our backyards. This guy James Powell has national recognition and
just landed a $100 Mil Stimulus award for a “Green” Oberlin College project and also lectures to kids. He’s done some very good work, it seems, but unfortunately took up the AGW cause with a vengeance, too:
Environment, Real Time News, Science »
Meet James Powell, a geologist who says climate skeptics are being duped
By Michael Scott
November 20, 2009, 5:14AM
“Powell claims the ‘denier movement’ actually began around 1992 — the year in which he said media more often began quoting politicians rather than scientists in their climate coverage. He said that’s when large corporations began to pour money into conservative think tanks — which he calls ‘skeptic tanks.’
Powell said his lecture topic, “Skeptic Tanks: How Global Warming Deniers Dupe America,” forms the foundation of his next book, which has yet to be published.
He also offered a wager to skeptics that the earth is still warming despite a slight downward trend in recent years.
“I’ll bet any of them that five years from now our global temperatures will be higher than they’ve been,” he said. “If that’s not true, then there’s something fundamentally wrong with the science and our understanding of it.”
Obviously, no quarter will ever be given skeptics/realists. Join in, if you like.
Just found this chestnut:
Friday, November 20, 2009
Briffa on problems with tree-ometers
Alleged CRU Emails – Searchable
Date: Thu, 16 Nov 2006 11:57:09 +0000
From: Keith Briffa
Subject: Re: Mitrie: Bristlecones
…
…The main one is an ambiguity in the nature and consistency of their sensitivity to
temperature variations.
…The bottom line though is that these trees likely represent a mixed temperature and moisture-supply response that might
vary on longer timescales.
…
This is also related to the “strip bark” problem, as these types of trees will have unpredictable trends as a consequence of aging and depending on the precise nature of each tree’s structure.
….
hunter (20:47:29) :
Would the NY Times turn down the Watergate tapes if 1974 were today?
Not even. Political correctness only goes so far against the smell of scandal in the wind. Journalists these days may be on a veggie diet, but give them a taste of blood, and the wolf comes to the surface.
I’ll give the NY Times a bit of room here to expand coverage. Implications on the face of it says that there are more fish to fry in other high places.
This scandal has exposed the BBC environment and science departments complicity and it will be very interesting to watch the BBC squirm.
The insider trading and dealing in ‘fixed’ data and conclusions and the overly cosy relationship between a state broadcaster and a group of scientists may well have major ramifications for the BBC, do they disown their own reporters now or do the BBC hang on and suffer contamination by association?
It seems that Black/Harrabin/Shukman have some explaining to do, its a question of who will be thrown to the wolves, there is no honour among thieves and it will be interesting to see who disowns who.
From Nov. 21’s New York Times article:
IOW “Nothing to see here, folks; move along.”
Unfortunately for the Times there’s this Interwebs thingy to let a broad swath of the public see for themselves the chicanery here. The story is on Drudge and Fox News, and Limbaugh’s already referred to it. The media moonbats will have a very hard time ignoring it—even if the NYT pooh-poohs the revelations.
You can bet that Sen Imhofe will be stuffing the emails down Barbara Boxer’s throat.
[i]test[/i] test
[quote]test[/quote] test
Off Topic Here, but I could use a little assistance on this one. Since the start of ClimateGate today, I have been discussing back and forth with my RC-Koolaid drinking father. He is pulling all kinds of straw-men out of his hat. The latest I post here with a question to you all for a little assistance with what he writes. I did my best in my first response to him, but I have limited knowledge of some of the statements he makes. Anyone willing to give a hand here?
Excerpt:
I just watched this video:
Did Mr. Watts really try to have YouTube remove it?
Has Mr. Watts really appeared on Glen Beck’s show?
Was his study of US weather stations really published by the Heartland Institute?
Is it true that the 70 stations given high marks by Mr. Watts still replicate the overall network results?
Have you read this NOAA response to Mr. Watts book?
TattyMane (19:56:02) :
“. . . I should have said in my previous post that I had attempted to post the Alice in Wonderland analogy over at RC but apparently Gavin’s tolerance for being compared to Humpty Dumpty is low. Oh well.”
Nominated for post of the day.
No doubt this post will get lost in all the excellent posts above and still to come.
It seems to me that the Editors of Science, Nature and other journals that dabble in climate matters are in this up to their eyeballs and have been a part of and totally complicit in perpetrating this entire scam on the world of science, the public and the political systems of every western nation.
Had these editors stuck to their supposed policies of requiring complete documentation and archiving of every aspect of every paper presented by the HADCRU / GISS Team – so that papers were totally open to scrutiny by all comers and could be checked and verified in an open forum – then this situation would probably never have arisen, as the Team would have had to come clean with the presentation of their papers for publication.
The editors of those so called prestigious science journals did not enforce their policies if they ever had any, and it now seems that they were also quite open to overt manipulation by members of the Team.
If the science journals wish to retain credibility – and they alone are totally responsible for publishing the now discredited papers without any real checks – then the Editorial and Governing boards of those journals should immediately take vigorous action to remove the offending and complicit editors and completely revamp and rigidly enforce their editorial policies on requiring full declarations and archiving of ALL relevant materials pertaining to a paper.
No action by these science journals just means that this scam will be repeated again and again, and in the long run the tax-paying public’s trust in the integrity of science will be drastically eroded and the discipline of science will suffer an unjustifiable collapse in the public’s regard for its high status.
Please note
Disagreement between
Benjamin D. Santer & John Christy
Benny is pissed at John
Seems John doesn’t drink the Kool Aid
1248993704.txt
I have been over to RC and, whilst it was more open than normal, the faithful were hyperventilating. (Even the cheering news of the death of John Daly was reinterpreted and the critical comments claimed to be evidence of denialist evil.)*
I am however, convinced, that we must maintain our courtesy to those who come here with courtesy. We may have a vast number of confused visitors here over the next few days.
Remember, we were all like them once. (At least I was).
Also honesty is always the best policy.
* Look, I can’t explain this either – go there and check it out.
‘The spin doctors
Of climatology
will deny any bias
In their tricky methodology.’
Thanks to Steve Mc., Anthony, Jeff, Finn, et al for your dogged pursuit of real data. Time to put on my fireworks display!
Well, I would add a nice organized crime charge to this… say, RICO?
IANAL, but it seems to me that Mr. Mann and Al Gore are quite possibly guilty of conspiring to defraud governments of hundreds of billions of dollars.
I keep hearing the AGW proponents saying these documents were stolen, illegal, etc. They want to fully prosecute, etc. Huh?
WE HAVE NO IDEA IF THESE DOCUMENTS ARE ILLEGAL OR STOLEN. For all we know they were posted completely legally by one of the people listed in the emails. I think Dr. Jones posted them, for example, because in a sudden attack of guilt he turned – and is too embarrassed to tell his friends, and thus refuses to acknowledge that he actually did it, hiding his identity by posting from Russia. Until I know who the poster is and how the docs were obtained, I assume they are legal, and that the person or persons posting them want to conceal their identity – probably someone in the emails, responsible for the data, who wants us to know the truth. Thank You Dr. Jones.
What will Gordon Brown do now? He was relying on green taxes to get the UK deficit down. “I’m raising your taxes due to government incompetence” doesn’t sound good, does it?
“…..I’ve just completed Mike’s Nature trick of adding in the real temps
to each series for the last 20 years (ie from 1981 onwards) amd from
1961 for Keith’s to hide the decline….”
———————————————————
The RC interpretation:
…. Well we used the word “Trick” to cover up the fact that we misrepresented a graph which is “fraud”, but we sorted all that out by redefining “Trick” to mean “Science” and fixed the “fraud” by using our “Science” to get Funding…..
You know it makes sense……:-)
ref 1258053464.txt
This is one I like…. read what Mann says in the post script
It’s quite obvious from the emails and the posts at RC that there really is only one ‘team’. Don’t they understand that you don’t get to be your own judge and jury in this world?
Dear Anthony,
Pretty please retitle “Tips and Notes to WUWT” on upper right of blog to “Tips and Tricks to WUWT”.
Working on the hockey stick:
And reading all of these blog entries (both here and on Real Climate and a few other places) there seems to be some confusion about what is acceptable conduct in science… So I’d like to remind everybody of a quote from Carl Sagan:
“The suppression of uncomfortable ideas may be common place in politics or religion…. but it is not the way to knowledge… and there is no place for it in the endeavour of science”
That to me is the fundamental issue…. are these guys trying to suppress uncomfortable ideas?
I think they are and therefore, in my mind, they are not behaving as scientists.
As for whether or not they have committed offences under the law, these are matters that I would love to see tested in a court of law.
We need to focus on fundamentals. Apart from the odd troll, everyone here will, for some time, have had a pretty good idea whether Phil Jones or Richard Lindzen is a more credible scientist – or indeed whether or not RC is more reliable than CA or WUWT.
The important thing to focus on is the fact that the ‘scientific consensus’ is being used to support (ostensibly) a move from a ‘High Carbon’ to a ‘Low Carbon’ economy. And to do this as a matter of NOW, with great urgency, irrespective of how ‘robust’ the science is.
In fact there are some fairly good arguments in researching and in implementing ‘Low Carbon’ energy production when mature and commercially attractive alternatives (without ginormous subsidies) to burning fossil fuels become available.
But, in my book, the worst sin of Jones, Mann and the rest is that they have deliberately panicked the politicians and the media into the belief that this must be done NOW in order to save the planet. Mainly motivated by arrogance and the desire to keep their comfortable, very well paid jobs, lavish research funding, index linked pensions. And, of course the all expenses paid jollies to Tahiti.
Cooler heads in industry and finance – and some more intelligent politicians – may realise this but see enormous potential profits from the carbon trading scam. That and the chance to fulfil their ambition to set up an eco-fascist superstate, a glorified version of the EU, where the ‘Political Elite’ will be able to control the lives of all the rest of us, accountable to nobody.
In that sense, whilst Briffa, Jones and the rest think perhaps that Gordon Brown is just a pawn in their AGW game, the reality is that the ‘Team’ are actually pawns in a much bigger game.
And this game will cause incalculable damage to the real economy, plunge hundreds of thousands even deeper into fuel poverty and destroy hope for millions in the Third World who could be provided with clean water, affordable and reliable energy, education and health (and, eventually, good governance?) for a fraction of what is being proposed to ‘save the planet’.
I wish I believed, as many commentators do, that this hack (if it was a hack) will make much difference. I wish I believed that Jones would get his ass kicked, even behind closed doors. I wish they would be punished for their blatant & barefaced violations of FOI laws. But I doubt it will happen.
The powers that be are far more interested in closing ranks.
>>Here it is at the New York Times:
>>
Ouch. It does not look good for the AGW industry at all. The trouble with losing credibility is that former friends start leaving in droves. Politicians, who were SO friendly yesterday, will not wish to shake hands with a pending legal case. Better to stay on the sidelines until it all blows over.
Mr Mann and Hadcrut might find themselves rather lonely for the next month or two. It does not bode well for a good conference at Copenhagen.
.
I like this exchange – how to combat criticism from ‘deniers’.
Tim, Phil, Keef: …although here, at least, it is already quite out of control…
Ray
Ray et al
… this whole process represents the most despicable example of slander and down right deliberate perversion of the scientific process , and bias (unverified) work being used to influence public perception and due political process. It is, however, essential that you (we) do not get caught up in the frenzy that these people are trying to generate, and that will more than likely lead to error on our part or some premature remarks that we might regret.
Keith
Guys,
So the verification RE for the “censored” NH mean reconstruction? -6.64
I think the case is really strong now!
What if we were to eliminate the discussion of all the other technical details, and state more nicely that these series were effectively censored by their substitutions, and that by removing those series which they censored, I get a similar result, with a dismal RE.
Thoughts, comments? Thanks,
mike
Endquote:
.
More like politics than science.
And lots of the newer emails are defending themselves against WUWT and CA. It just goes to show how much pressure these sites have put these people under.
.
Mike’s Nature is to Trick.
Quote:
I’m really sorry that you have to go through all this stuff, Phil. Next time I see Pat Michaels at a scientific meeting, I’ll be tempted to beat the crap out of him. Very tempted.
How very scientific (Michaels is a skeptic.)
.
Bernie Madoff wanted to hide a decline too.
He also got away with it for years.
Quote:.
Indeed he does.
As an aside, this looks like leaked data rather than a hack. A great deal of the information is very specific to McIntyre and WUWT, which shows an interest in the various debates about withholding data and Yamal trees. Unless, of course, the emails we have been given have been selected to include this very topic.
.
And I see that all this research is not science, but a CAUSE !! AGW was and is a purely political CAUSE, just as I always thought.
Quote:
Sounds like you guys have been busy doing good things for the cause.
If Greenpeace is having an event the week before, we should have it a week before them so that they and other NGOs can further spread the word about the Statement.
.
Er, these emails were stolen. Why are you displaying and distributing stolen goods?
In an undisclosed location, under HOT lights and with smoke and mirrors, an interrogation: “I have no recollection of the murder 10 years ago [of climate science] and have no idea what I meant when I pulled the trigger”.
Alleged CRU Emails – 1051202354.txt
Can you believe the arrogance of this Mann..
>>> This second case gets to the crux of the matter. I suspect that deFreitas deliberately chose other referees who are members of the skeptics camp..
>> Professor Michael E. Mann
>> Department of Environmental Sciences, Clark Hall
>> University of Virginia
>> Charlottesville, VA 22903
“Climate scientists accused of ‘manipulating global warming data'”
“One email seized upon by sceptics as supposed evidence of this, refers to a “trick” being employed to massage temperature statistics to “hide the decline”.”
ralph (02:17:16) :
>>Here it is at the New York Times:
Nice to see it’s not hidden away; go to the home page, find Science, and it’s the top story.
Bhanwara,
The question of how they came to the public has not, in fact been established.
The people who are outed by them claim they were stolen.
But even if they were stolen, they were dumped into the public square, and so are in the public domain. That means that those who did not allegedly steal them are free to use them.
But on a personal note, is it not a bit pitiful that your only interest in the e-mails is that they were allegedly stolen?
SandyInDerby (16:26:27) :
How to flatter the BBC and get a reporter fired:
1111085657.txt
At 12:48 17/03/2005, Michael E. Mann wrote:
Hi Phil,
Yes, BBC has been disappointing in the way they’ve dealt with this–almost seems to be a contrarian element there.
Do you remember the name of the reporter you spoke to?
Thanks,
Mike
………………………
It’s so convenient having all the emails unzipped in a file. You can search so fast. Try searching “splice” to see if you agree with the RealClimate assertion that the team would NEVER splice two different data sets together. There’s more than one.
Then search WWF to see how money seems to be transferred for favors with the IPCC. Search Greenpeace for special pleading.
Try 1051230500 or search “referee” to see how referees were eliminated or added to enhance the chance of team publication; then devise a search to look at how the souls of managers/owners of a journal were bought; then have a look at how papers submitted too late for inclusion in the IPCC were “rebadged” as Steve reported long ago. It’s all in there.
What gets boring is the repetition upon repetition of the same band of merry men. One of the women was not so merry. Try Sarah Roper at 0932773964.txt for language unbefitting a lady (bully).
Bhanwara (04:05:04) :
> Er, these emails were stolen. Why are you displaying and distributing stolen goods?
What evidence do you have that the emails were stolen?
If the person who originally released the emails and documents had accessed rights to them then all of these documents have simply been “leaked” and not stolen.
There is a long history of the MSM reporting on leaked documents; in fact in the UK, one of the major stories this year has been about the leaked MPs’ expenses. There has been no outcry or call for the “leaker” to be brought to justice because the contents of the leak were so extraordinary.
Until evidence surfaces to the contrary I am going to assume that whoever posted this collection of emails and documents had a legal right to them and therefore did not steal them.
Bhanwara (04:05:04) :
Er, these emails were stolen. Why are you displaying and distributing stolen goods?
Er, why do you care? Er, now go back to your, er, troll cave.
“Bhanwara (04:05:04) :
Er, these emails were stolen. Why are you displaying and distributing stolen goods?”
Does publically funded mean anything to you? Stolen, my adz!
it would be interesting to know what the CRU people search the emails for….
Mark T (16:59:32) :
Um, no, you misunderstand the divergence problem. I shall explain…
The tree-rings are being used, after a sort of weighted average, to determine temperature. The divergence problem is simply the fact that post 1960 or so, many of the tree-rings no longer correlate well to temperature.
++++
I’m under the impression “no longer correlate well to temperature” means they no longer show the increased growth rates they’d expect from increased temperature. If that’s true for a warmer last quarter of the 20th century, why wouldn’t it also be true for a warmer MWP? At least, if warmth beyond a certain point is what is actually causing the issue. Because if it is, it seems likely to me that the MWP is undersized in the dendro record as well for the same reason.
But, as I said in a post a little further upstream, I’m not a fan of the dendros until they really *know* what causes the divergence problem and then additionally can convincingly display where else in the pre-instrumental record such conditions existed and correct their reconstructions for it in those eras and locales as well.
Not that “the divergence problem” is the only mountain the dendros have to climb, as Steve McI has convincingly shown time and again.
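The divergence problem described in this exchange can be shown with a fully synthetic toy series (the linear trend, the alternating wiggle, and the cutoff at “1960” are all made up for illustration, not real tree-ring or instrumental data): a proxy that tracks temperature through the calibration window and then goes flat produces exactly the correlation collapse being discussed.

```python
# Toy illustration of the "divergence problem": a proxy that correlates
# with temperature during the calibration era but decouples afterwards.
# All numbers are invented.

def pearson(x, y):
    # Plain Pearson correlation, no libraries needed.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

years = range(100)                               # stand-in for 1900-1999
temp = [0.01 * i for i in years]                 # steady warming
wiggle = [0.05 * (-1) ** i for i in years]       # deterministic "noise"
# Proxy tracks temperature until "1960" (i = 60), then flattens out.
proxy = [t + w if i < 60 else 0.55 + w
         for i, (t, w) in enumerate(zip(temp, wiggle))]

early = pearson(temp[:60], proxy[:60])   # calibration era: strong correlation
late = pearson(temp[60:], proxy[60:])    # post-1960: correlation collapses
print(round(early, 2), round(late, 2))
```

The point of the sketch is only that a calibration fitted on the early window says nothing about the flat late window, which is why the post-1960 divergence matters for pre-instrumental reconstructions.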
From: Phil Jones To: mann@… Subject: Fwd: CCNet: PRESSURE GROWING ON CONTROVERSIAL RESEARCHER TO DISCLOSE SECRET DATA Date: Mon Feb 21 16:28:32 2005 Cc: “raymond s. bradley” , “Malcolm Hughes”
…
PS I’m getting hassled by a couple of people to release the CRU station temperature data.
Don’t any of you three tell anybody that the UK has a Freedom of Information Act !
Bhanwara (04:05:04) :
Er, these emails were stolen. Why are you displaying and distributing stolen goods?
———-
Looks like the team has decided on how to respond to this development.
Stolen? Hahahaha.
I think you’ll find that COPIED is the correct terminology.
In fact; if anyone concerned were to accidentally lose any of it, they now have many convenient online backups at their disposal.
The Washington Post article was written with total sympathy for the comments of Mann and the other culprits in this episode. The NYT & Boston Herald were far more critical. I don’t think Andrew Revkin appreciated public emails that associate him with scientists who lack integrity.
Thanks for mirroring this. I’ve added your link to my post about this matter.
Am I right about the following?
“Mike’s Nature trick” entails the following theoretical proposition:
– Smoothing/padding the proxy data with instrumental records is equivalent to smoothing/padding it with the proxy data which will become available in the future.
We are now 11 years out from the padding/smoothing trick that produced the WMO graphs.
To test the theory “Mike’s Nature trick” is based on:
the proxy record must be made current (updated to 2009);
the application of the instrumental record should be shifted forward by a corresponding number of years;
new graphs should be generated;
and they should exactly match the 1998 graphs.
A failure to match would be dispositive.
And furthermore, without experimental verification why should any scientist consider “Mike’s Nature trick” valid?
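The proposed test can be sketched in miniature: if the trick’s premise held, padding a smoother with the instrumental record and padding it with the proxy values that actually arrived later would give matching endpoints. The moving-average smoother, the window width, and every number below are invented for illustration; this is not Mann’s actual procedure.

```python
# Toy comparison of two padding schemes for a centered moving average.
# The final smoothed points depend on what values are assumed to lie
# beyond the end of the series.

def smooth(series, pad, window=5):
    """Centered moving average; points near the end borrow values
    from `pad` in place of data beyond the end of `series`."""
    half = window // 2
    extended = series + pad[:half]
    out = []
    for i in range(len(series)):
        vals = extended[max(0, i - half):i + half + 1]
        out.append(sum(vals) / len(vals))
    return out

proxy = [0.0, 0.1, 0.05, 0.2, 0.15, 0.1, 0.05]   # series ending in a decline
instrumental_pad = [0.3, 0.4]                     # rising substitute values
future_proxy_pad = [0.0, -0.05]                   # what "actually" came next

with_instr = smooth(proxy, instrumental_pad)
with_future = smooth(proxy, future_proxy_pad)
print(with_instr[-1], with_future[-1])
```

With the rising substitute pad the smoothed series ends noticeably higher than with the declining pad, which is exactly the kind of endpoint mismatch the proposed re-run with updated proxy data would (or would not) reveal.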
Squidly (22:18:31)
Did Mr. Watts really try to have YouTube remove it?
Has Mr. Watts really appeared on Glen Beck’s show?
Martin Brumby (02:13:35) :
You mention urgency. We have a tight economy; it is always tight in at least a few ways. Urgency is a closing tool. When a scientist closes the deal and tries to get funding, they don’t want to await a verdict in 2 years and funding in 3. Too many distractions cause deal breakers. “We have to act now because 30 days may be too late” is a used-car selling technique. From this past week forward, the closing cycle for funding research that is tied to warming and planet issues will slow way down. My friends in corporate budget roles are holding 2010 budget meetings this month and early December. They sure don’t want to be embarrassed with initiatives that are clouded with scepticism. If a crank like Mann walks into DuPont, Monsanto, Pfizer, or another biogenetics or related firm, they will stall on funding weather research and forecasting. Trust me, they want very much to develop seeds that are drought resistant and pest resistant. But they can’t trust these voodoo scientists any more.
Folks, there will be no jail time. This is not a criminal case.
There won’t be criminal charges for violating FOIA laws; maybe a reprimand or demotion.
Mann and a host of other names will be treated like they have a disease when they drag in a proposal seeking funding for research. They will not be told no. They will just get a run around. James Hansen speaking invitations will dry up.
McIntyre has the case. He was denied freedom of information access and there are several e-mails bragging about interference.
The punishment is the free flow of funding will drop drastically. and it should. Dirty research methods hurt everyone.
“The days of media mind control over the general population are long gone.”
I’ll believe that when people stop voting republican or democrat, or labour or conservative.
There are still a sizeable number of heavily conditioned people out there for whom the thought of questioning what they see on the BBC/CNN/NBC/CBS etc. never even enters their minds.
They still believe that we live in a democracy in spite of all the evidence to the contrary. Our leaders are selected, not elected. The people are given a choice between globalist agenda supporting puppets.
People are waking up to the lies and misinformation, but nowhere near quickly enough.
another location to download from
They want to use climate in order to regulate our lives. California passed a regulation last week that limits our choice of large screen TVs in order to “conserve energy”. If a single modern nuclear plant were built, more energy would be generated than this regulation would save. Electricity would be cheaper. Build six of them and electricity costs in California would plummet. But they create a climate crisis and an artificial energy “shortage” by refusing to build generation and blocking access to local energy resources and use that as an excuse to regulate the living daylights out of our lives.
These “climate scientists” would have been in an extremely powerful position for those wishing to manage practically every aspect of our lives as the ADAM draft pdf shows.
I have a different idea.
Well it’s on MSN at last.
There is far too much damning evidence for this one to be swept under the carpet and once public opinion really gets into gear the AGW scam and the attempt to force world government on us is toast.
What’s been proved to be going on at UEA is just the tip of the ice-berg. I’m sure lots more insider info will be coming soon.
“trick” semantics. Isn’t this like “That depends on what your definition of is,..is.”
Where the real danger for the Alarmists lies is in the documents that were presented to Congress to debate the various issues at hand (ex. the now infamous Inhofe Hearings). As Barry Bonds and Clemens are now finding out, those hearings are not friendly get-togethers. If the information Mann et al presented to Inhofe’s Committee contained fraudulent information, and if grant requests contained fraudulent information, those people can be indicted.
Anthony:
Sorry:
I can not.
Write a comment.
Without breaking the law somewhere on the planet.
Regrettable.
An inconvenient trick…
Dr. Jones himself described Mann’s Nature trick as hiding ‘uncooperative’ proxy data. Gavin cites Nature as proof that Mann did not hide them. Gavin is in a tricky position indeed.
These revelations will not be universally happy for all the good guys, as the producer and director of “Not Evil, Just Wrong” will now, at substantial expense, be compelled to rename their movie “Mendacious, Malevolent & Wrong.”
Has this one been spotted yet?
“I’m sure some people will use CRU TS 3.0 to look at 2003 in Europe so we
need to be happy with the version we release.”
Cherry picking, no doubt about it
Original Filename: 1252090220.txt
From: Ian Harris
To: t.osborn@xxxxxxxxx.xxx
Subject: Re: Hopefully fixed TMP
Date: Fri, 4 Sep 2009 14:50:20 +0100
Hi Tim
I’ve re-run with the same database used for the previous 2006 run
(tmp.0705101334.dtb).
/cru/cruts/version_3_0/update_top/gridded_finals/data/data.0909041051/
tmp/cru_ts_3_00.1901.2008.tmp.dat.nc.gz
Is that any better? If not please can you send the traditional multi-page country plots for me to pore over?
Cheers
Harry
On 3 Sep 2009, at 17:04, Tim Osborn wrote:
> Hi Harry and Phil,
>
> the mean level of the “updated-to-2008” CRU TS 3.0 now looks good,
> matching closely with the 1xxx xxxx xxxxmeans of the earlier CRU TS 3.0
> and
> CRU TS 2.1.
>
> Please see the attached PDF of country mean time series, comparing
> last-year’s CRU TS 3.0 (black, up to 2005) with the most-recent CRU
> TS 3.0
> (pink, up to 2008).
>
> Latest version matches last-year’s version well for the most part, and
> where differences do occur I can’t say that the new version is any
> worse
> than last-year’s version (some may be better).
>
> One exception is the hot JJA in Europe in 2003. This is less
> extreme in
> the latest version. See attached PNG for a blow-up of France in JJA.
>
> I’m sure some people will use CRU TS 3.0 to look at 2003 in Europe,
> so we
> need to be happy with the version we release.
>
> Perhaps some hot stations have been dropped as outliers (more than 3
> standard deviations from the mean?)?
>
> But I’m not sure if that is the reason, since outlier checking was
> already
> used in last-year’s version, wasn’t it?
>
> Does the outlier checking always check +-3 SD from xxx xxxx xxxxmean (or
> normal),
> or does it check +-3 SD from the local mean (30-years centred on the
> value) which would allow for a gradual warming in both mean and
> outlier
> threshold?
>
> Cheers
>
> Tim
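Tim’s question about the two outlier checks can be sketched with made-up numbers (the ±3 SD threshold and the 30-year centred window come from the email; the series itself is invented): a value at the end of a warming ramp can be flagged by a check against the whole-series mean yet pass a check against a local centred mean, which is the difference he is asking about.

```python
# Toy contrast of the two outlier checks: +-3 SD around the whole-series
# mean versus +-3 SD around a 30-"year" centred local mean. The series
# is invented: 90 flat values followed by a steady warming ramp.

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def outlier_global(series, i, k=3.0):
    # Check against the long-term mean of the entire series.
    return abs(series[i] - mean(series)) > k * sd(series)

def outlier_local(series, i, k=3.0, window=30):
    # Check against a centred local window (clipped at the ends),
    # which allows for a gradual warming in both mean and threshold.
    half = window // 2
    local = series[max(0, i - half):i + half + 1]
    return abs(series[i] - mean(local)) > k * sd(local)

series = [10.0] * 90 + [10.0 + 0.2 * k for k in range(1, 11)]
last = len(series) - 1
print(outlier_global(series, last))  # flagged: hot vs the long-term mean
print(outlier_local(series, last))   # passes: consistent with local warming
```

If the real checking used the global form, genuinely hot recent values (such as the 2003 European summer the email worries about) could be dropped as outliers; the local form would retain them.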
Just two thoughts on this: First, since the official statements all have to do with “theft” or “unauthorized release”, etc., that strikes me as a strong indication that the material is real. If it was fabricated, they would be making noise about that. And, as many court cases have shown, it’s very hard to get rid of e-mails completely.
Second, just imagine internal e-mails from a drug company talking about using a “trick” to “hide” some aspect of the data from a clinical trial of a new drug. There is no connotation in which that would not be viewed as evidence that data were being falsified. It’s the stuff of an attorney’s dreams.
Claude Harvey,
As a skeptic, I have been called ‘paranoid’ and much worse for wondering if the coordination of the AGW promoters was not coincidental.
To find out that many of those who have been calling names the loudest have in fact been coordinating the massaging of data, the suppression of counter evidence, the destruction of fellow scientists, the corruption of the peer review process, the falsification of the IPCC, the misleading of governments, the avoidance of legal requests to release data, etc. etc. etc., is a bit shocking.
The question that comes to mind is this: are you OK with having world climate policy controlled by the claims these guys make?
Gordon (10:05:05)
You beat me to the punch. But then, great minds usually travel on the same paths.
If I am reading some of these emails correctly, it seems like Mann may have lied to congress during his 1999 testimony especially when you look at this trick in the context of trying to persuade congress.
Go ahead Gavin, throw him under the bus. Oh yeah, it’s all taken out of context.
btw, i just tried to post this over on RC, got deleted :-(
We shouldn’t jump to conclusions. Someone who illegally obtains files or who would distribute illegally obtained files would probably not have any scruples about misconstruing them for political purposes. Discrediting science may serve short terms goals for some but it could end up a disaster in the long run. My father taught me several math “tricks” for doing calculations and finding errors. A trick in that sense means something different than trying to trick someone.
I’ve always used NASA’s data which is based on measurements back to about 1880. The tree ring data was used to try to estimate temperatures back past measured values. There is a “decline” in fossil “data” as you near the present – for obvious reasons. The measured values were apparently “added” to the graph – but not to other numbers – to bring the graph up to the present. Who really knows what Mann was referring to – except Mann himself? I’m sure there are many who will not listen to or accept even a reasonable explanation from him.
Mann “However, their theft constitutes serious criminal activity. I’m hoping that the perpetrators will be tracked down and prosecuted to the fullest extent the law allows.”
I hope ‘the Mann’ realises that the prosecutor and police might be coming for him and his workmates also (obtaining public funds by deception/fraud). The UK MPs were ‘flogged’ for exaggerating their expenses. These guys, IMHO, did the same thing or worse.
Jimbo
Henry chance (08:20:09) :
Criminality? Try copyright.
1237496573.txt
Part quote:
“With many papers, we’re using Met Office observations. We’ve abstracted these from BADC to use them in papers. We’re not allowed to make these available to others. We’d need to get Met Office’s permission in all cases”.
We can’t stop in our fight on the cap and tax; it is not dead yet, as long as they can bribe/get 60 senators to vote for something like the healthcare debacle when polls are showing most in the US are against the so-called bills. With enough bribes from Harry, Nancy and Soros, they could still pass cap and tax.
I’m sorry, but unless there’s a bevy of printed correspondence out there in the scientific community, preferably among these folks in particular, referring to “trick” as if it is standard terminology for a benign procedure, it’s pretty impossible to defend its use as such.
What’s up with that? Is this the smoking gun? trend_profiles_dogs_dinner.png?
Chart shows no Human Induced Global Warming, shows Global Cooling?
hunter (05:08:12) :
Bhanwara,
The question of how they came to the public has not, in fact been established.
The people who are outed by them claim they were stolen.
Well, even if made public against the will of the writers, that’s a long way from “stolen”. Just one hypothetical: A systems admin (or even an authorized contractor) goes through orientation and is told “At any time email may be subject to FOIA or other publication. It is not your private communications.” Said person is AUTHORIZED to work on the machine, and sees evidence of what they believe is an immoral and perhaps illegal deception.
Since vandalism for the greater good is now allowed by precedent under UK Law this authorized person may take the data they are authorized to access and make it public for the greater good.
The only action that I see to be brought against them would be by their employer for violation of some corporate policy or other. (Though even that might well be set aside “for the greater good” as evidenced by Hansen still being employed after being arrested in violation of NASA ethics guidelines…)
Petard, meet hoist…
But even if they were stolen, they were dumped into the public square, and so are in the public domain. That means that those who did not allegedly steal them are free to use them.
In fact, since the FOIA training ought to have included warnings about “no expectation of privacy” there are likely lots of documents disclaiming any “private” nature to the email record. Good luck trying to assert an expectation of non-publication with training documents floating around declaring the expectation of future publication…
But on a personal note, is it not a bit pitiful that your only interest in the e-mails is that they were allegedly stolen?
Not only that, but an incredible naivete about the fact that email is a substantially permanent and frequently public record…
BY LAW in the USA you must keep it for between 5 and 7 years (varies a bit by type of company) and maybe longer. That is the legal minimum. It must be made available for “Discovery”. It must be made available to any lawmaking body (may require subpoena). It must be made available to law enforcement (again, usually with a subpoena, or management approval). It is almost universally available to the “line management” above any given person. It is available to all the geeks who work on the box (subject to sufficient ‘rights’) and the systems admin may be REQUIRED to search email for particular types of communications (i.e. H.R. can request “harassment” language searches) and corporate security operations can request full transcripts of anything. Oh, and legal gets to read anything they find interesting too… And more…
So once again: Good Luck demonstrating that a ‘reasonable expectation of privacy’ exists… And if there is no reasonable expectation of privacy …
And folks wonder why I’m not keen on using email…
It is NOT subject to the same privacy expectations as paper mail… nor the same access restrictions. It IS substantially an open book.
I’ve been the Director of I.T. at companies. I’ve reported to the V.P. of Legal. I’ve done the email dumps, and providing. I’ve been through the SARBOX legal training. I’ve worked in the security department of financial firms and I’ve been through the FBI checks for same. And I’ve even been the email admin personally. If you don’t think the “staff” can wander through your email (and is often required to do so…) then you have no clue. None.
The only reasonable assumption when writing an email is that it will be on the front page of the local newspaper AND enshrined in Google FOREVER.
It is interesting to watch the back-and-forth on this one.
Were the emails stolen? There is no evidence of theft, only the presumption and statement of theft by Mann et al. The emails could have been leaked. Or the emails could have been released to a party that had a successful FOIA request, and that party could have put them on the Russian server.
Were the contents even capable of being stolen? The emails and documents all appear to be non-privileged communication between public servants. The data is PUBLIC and should have a zero, or near-zero, access cost. As such, this sort of data cannot be stolen. What appears to have happened is that the active interference with FOIA requests has been side-stepped.
The emails appear to show a conspiracy to defeat FOIA requests. That conspiracy alone is a crime, regardless of other fraud and conspiracy crimes committed earlier.
Prepare your mind and consider the “Brogan Case” implications. Any false communication directed to a US Federal official, in written or oral form, under oath or not, is a straight felony. (In the sense that it is not a “wobbler”, it cannot be pleaded down to a misdemeanor.) Anybody in the AGW liar camp who testified before congress, or even wrote a note and sent it to a congressional staffer, where there was intent to deceive, has committed a felony.
It was a Brogan-style prosecution that put Scooter Libby and Martha Stewart in jail. There was no underlying crime, but lying to investigators checking for a possible crime was a crime.
For the motivated Federal prosecutor, there is a lot to work on.
Jimbo (16:14:38) : “Mann: “However, their theft constitutes serious criminal activity. I’m hoping that the perpetrators will be tracked down and prosecuted to the fullest extent the law allows.” ”
Is this from the same man that had no problem helping to defend vandals in England because he believed their cause was just?
I’m wondering if declineseries.pdf shows the divergence between dendro and surface temp data?
All Your Emails Are Belong To Us
crosspatch (09:17:47) :
‘But they create a climate crisis and an artificial energy “shortage” by refusing to build generation and blocking access to local energy resources and use that as an excuse to regulate the living daylights out of our lives.’
Heck of a business model, isn’t it?
So Mann’s solution was to use the instrumental record for padding, which changes the smoothed series to point upwards as clearly seen in UC’s figure (violet original, green without “Mike’s Nature trick”).
So… the thermometers are lying?
Shurley knot,
No, to him, the trees were lying. He needed a surrogate because the trees didn’t cooperate. He used poor judgement, cherry picked from another database, used whatever data he could find to support his conclusion; he lied. It’s your choice. But it’s OK, he was on a mission to save the planet.
In a few cases, the thermometers actually were lying. The temperatures at the Tucson Airport were measured for a few years with sensors that were later shown to be biased to high temperatures. This explained why so many record high temperatures were recorded during the time those sensors were used. Note that the data has not been removed from the record, despite the fact that everybody acknowledges the problem.
In more cases there are local micro-climate and Urban Heat Island effects that affect the temperature readings. This web site has links to the weather station site surveys at Surfacestations trying to determine how bad the micro-climate effects are. My personal favorite was the official weather station surrounded by 23 window air conditioners, but there are also ones near barbeque grills and on tarmac and occasionally in the exhaust of commercial airliners.
The AGW advocates have been arguing for years that these effects either are negligible or have been properly corrected, with Phil Jones one of the strongest voices on the irrelevance of the UHI effect. Note that some of the emails are from Jones, acknowledging that one of his papers on UHI is affected by fraud by his coauthor.
CRU has also hidden the algorithms it uses to take temperature histories from weather stations and generates its gridded temperature anomaly data series. There is no way to know how the temperature data has been adjusted, manipulated, or mangled before it confessed.
The Register’s take on this:
“It does not bode well for a good conference at Copenhagen.”
Let them crack open a Carlsberg and bawl in their beer.
Here’s a great quote from the latest NYT article:
“‘This is not a smoking gun, this is a mushroom cloud,’ said Patrick J. Michaels, a climatologist …”
By the way, here is a suggested name for this scandal:
FOIAphobia Jones and the Meltdown Mann Hoax
It may be too long, but it could be box office gold.
On further review, the narrative of the letter does not fit the post’s surmise. I remember Mann and RC running to this excuse for the “divergence” problem, but they do not do so now.
Mike’s ‘tricks’ are several, and smoothing functions are covered in Intermediate Statistics, the usual opening course for Math junkies, if not scientists.
Copy the original emails. Send the link viral.
PUT THEM IN JAIL NOW! Felony after felony. These are not fake emails. You could not FAKE that much data. PERIOD.
Hold them accountable. Pass the link.
Well, my comment has been in “moderation” for over twelve hours at RC (granted, he is terribly overloaded). It was this:
[Begin original Comment]
[Response: Sure it can. TSI + volcanoes. – gavin]
On #280, what I meant is that something that “would” (but did not) happen cannot be measured. Is it your assertion that any warming not explained by natural drivers is automatically anthropogenic and there are no unexplained drivers? In other words, is everything left over filed under “man caused” until further explained? I ask this because when unexpected cooling is observed, albeit not prolonged yet, it seems like it is never even considered that anthropogenic warming is overestimated, just that the cooling is unexplained.
It seems like an all-out effort to preserve the estimated AGW. Bias is the concern.
I know you are very busy. Thank you for all the time you have taken.
[End of Comment]
I didn’t even add this exchange from file #1255553034.txt:
Here is Mann commenting on the unexpected cooling (after wondering why this “new” BBC reporter was even reporting the cooling. See file 1255558867.txt) :
]
Then Tom takes a look:
[Begin excerpt].
[End excerpt]
Of course Mann, the ringleader, comes back:
[Begin excerpt]
[End excerpt]
Finally Gavin chimes in:
[Begin excerpt]
[End excerpt]
They fail to see that you can’t just remove observed data and draw on different comparisons to mask the models’ lack of ability to account for various natural drivers.
Are these more “Tricks”? I agree with Tom on the final reply:
[Begin excerpt]
Gavin,
I just think that you need to be up front with uncertainties
and the possibility of compensating errors.
Tom.
[End excerpt]
Shelly T. (04:39:02) :
The alarmist site linked to the poster’s name states in the “about” section:
“There is zero tolerance on this site for those people and their lies, misinformation, and attempted obstruction of the truth that the public needs.”
This is a particularly nasty troll, but likely just a hit and run.
Is this where the “trick” is added into the climate model? I’m used to simpler programming code!!!
;
trv=0 ; selects tree-ring-variable: 0=MXD 1=TRW 2=MXD-TRW
case trv of
0: fnadd=’mxd’
1: fnadd=’trw’
2: fnadd=’mxd-trw’
endcase
titadd=strupcase(fnadd)
;
; Get chronology locations
;
print,’Reading ‘+titadd+’ data’
if trv eq 2 then begin
restore,filename=’../alltrw.idlsave’
trw=mxd
restore,filename=’../allmxd.idlsave’
mxd=mxd-trw
endif else begin
restore,filename=’../all’+fnadd+’.idlsave’
; nchron,idno,idname,location,country,tree,yrstart,yrend,statlat,statlon,$
; mxd,fraction,timey,nyr
endelse
;
; Now read in the 1961-90 monthly means of precip and temperature
;
print,’Reading precipitation baseline’
restore,’/cru/u2/f055/data/obs/grid/surface/precip_new_19611990.idlsave’
; g,nmon,ltm5
pre6190=ltm5
print,’Reading temperature baseline’
restore,’/cru/u2/f055/data/obs/grid/surface/lat_new_19611990.idlsave’
; g,nmon,ltm5
lat6190=ltm5
;
; Now read in the land precipitation dataset.
; Although there is some missing data in Mark New’s precip anomalies, all
; MXD boxes have sufficient data, so do not have to use surrounding boxes.
;
print,’Reading precipitation data’
ncid=ncdf_open
<There’s more in the file, found at: FOI2009.zip\FOIA\documents\osborn-tree6\datastore\examplets.pro>
So much to try to understand!!
There has been a lot of speculation on the name of this scandal. Many want to call it “something-gate.” It is much too big to get that tired old suffix. Besides, it has a catchy name.
FOIA.ZIP
@Ron Cram (15:57:05) :
I know the “hide the decline” email is getting lots of notice (and it is noteworthy), but I am even more interested in the “delete the emails” instructions given. This looks to be evidence of conspiracy to obstruct. Where are the lawyers among us?
Agree… & AGW-FOIA.ZIP
Also,
This should all be Public Domain – public funded – no state secrets – etc. – we need Big Government to push this up to the top of the stack…
Gavin, Tim, Mike and the boyz have not told us where they hid the LIA or MWP… those two facts alone trash all else IMO… ( not to mention ICE AGES and subsequent warming )
Anthony – I think some are new to the WUWT site and may not be aware of the Viking Farms on Greenland – Also proxy data has its limits – ICE-CORES have CO2 migration issues which negate their validity – as does the tremendous variation of sampling, categorizing and documentation of the tree-rings data. All AGW is very shaky at best.
At least we can now see the entire house of cards collapsing – we need to hustle back into the US Supreme court to get CO2 declared plant food again –
“Prior to analysis, small gaps in proxy series during the latter part of the
calibration period (between 1972 and 1980) were filled by Mann et al (1998).”
Just saying, Mike’s Nature Trick may be important. That paper was published in ’98, email in ’99.
But 1972-1980 don’t look cold.
Of course, in the middle of the 70s, there was a significant concern about falling temps. Now why is that? There is only a difference of about 0.1C between 1940 and 1980 on my neat little chart.
Science is not a religion, unless your pimping science as your own personal whore. The whole thing is disgusting. Perpertrating a fraud on the human race as to purposely cause goverments to enslave and punitivley punish innocent untold millions upon millions people on this planet whose only crime was to be born breathing. These men are the best that Evil has to offer,For they have done wrong to one and all for reasons that they belive are justified by their religion that is but self evident to them such that being self righteous dictates. The Eco-Hitlers of our time.
There isn’t a very good reason to hide the recent global decline in temp. In fact a motive for hiding it is very unlikely to come from a scientist studying climate change. The global rise is looked at over several centuries and is not effected by little blips, it still shows a steady and persistent rise over the long time frame. There are several periods that show temp declining, they are on much shorter time frames and don’t change the long term rise being observed.
Get a life.
Now, wait a second… the “Nature Trick” looks perfectly reasonable to me.
If you have archaeologic/dendrologic data spanning a thousand years, but you have very precise temperature measurements over the last few decades, why not use the data? The ancient data is used over the time period we don’t have carefully measured data, while thermometer-recorded data is used over the period in which it was recorded. Simple.
The only “decline” I see is that shown where the ancient data fizzles out. Of course I would use my more precise and accurate data at that point! Only an idiot would use uncertain/inaccurate data at that point (I would describe my exact methods and reasons in the description and text).
Nikabok – It is scientifically invalid to append a data set to another data set if the 2 sets are not consistent. The fact that there was divergence between the data sets implies that the instrument data invalidated the archaeological/dendrological data set. Continuing to use the first data set is therefore invalid. Cherry picking results from the two data sets is scientifically fraudulent.
Holy Moly.
Thank again to Fox News (that media that isn’t a real news company per our leader) for airing this report or I would have never new about it. Just like the Acorn scandel.
The swines, no valid peer review and the make out like is nothing, their rabid follows just put their fingers in their ears and go la la la.
I wonder what legal action will result from the hacking and subsequent circulation of the documents and emails? The revelations from them don’t seem particularly *wow* to me.
Early on, Geo wrote that he typed in “tips and tricks” and got 12 million plus responses. DUH. “Tips” completely skews the context. The hacked Email mention of “trick” says nothing about “tips and tricks.” It says “Mike’s Nature TRICK… to HIDE the DECLINE”. When you search for “tips and tricks” you’re going to get a lot of information about video game solutions etc. You may has well have tossed in “tips and tricks for retards who are in denial.”
Those who received public grants and have fudged ( that is a term we BS detectors use all the time ) the results should be prosecuted (if there are still any honest prosecutors and judges) and put in prison. Let them sell fake herbal remedies and other snake oil medicines when they get out. How’s Obama and Gore reacting to the recent news?
Dave – “It is scientifically invalid to append a data set to another data set if the 2 sets are not consistent.”
The problem is that the two (actually four) data sets *are* consistent, up to the mid-20th century, where a proxy measure (derived from tree ring data) departs from *observed* data. So while you are right that you have to be careful in merging sets of data gathered under different circumstances, this is not the problem here.
The problem is that there’s something strange happening in the trees: perhaps they are actually *responding* to the warming climate by changing their growth patterns in a way that looks like they think the earth is cooler. Perhaps like a dog panting in the heat. That’s a pretty frightening thought: that the very thing that many people are today taking to be evidence that there is no warming climate could in fact be the trees’ way of saying “So long, and thanks for all the fish.”
The divergence of proxy from temperature that they were hiding was a real problem for them. They avoided by ignoring it. When they are trying to validate their tree ring proxies against real temperatures it is ludicrous to trunicate and substitute real temps.
The whole reason the need to “hide the decline” is because if you include the decline then the supposed temperature proxies, tree ring data, don’t correlate to measured temperatures. No correlation means that the data is not a valid proxy. So the whole point of the “trick” is to manufacture a spurious correlation, aka false correlation, and thus false proxy. One can then use the false proxy to generate false historical predictions of temperature. produced measurements that correlate to global average temperature but not to local temperature conditions. Your instrument can use whatever input it wants, rainfall, sunshine, mercury expansion, air pressure, whatever. Hell, even temperature.
You couldn’t design that if you wanted and certainly there is no selective pressure on trees to measure global temperatures, so evolution isn’t going to design one either.
What ever the methods used to measure it seems clear that they are all pointing to a serious problem for mankind.
Did I put you all to sleep?
So this is the “trick” hubbub? Sadly, the hubbub probably won’t die, as It is quite obvious that many, actually most, people have no idea how a computer creates a polynomial average curve from a data set. When a curve is created of a set of data points from an algorithm, the very ends of the trend line can be inaccurate as the end data points may be inappropriately amplified or incomplete….. especially in polynomials of multiple orders. In the above chart, the downward projection of the end of the green trend is caused by the premature end of the proxy data. In fact, it can be easily proved that the green curve is incorrect, as it differs from the actual recorded instrumental temperatures in the end. Anybody with more than even a limited knowledge of mathematics can see that the green line is an inaccurate depiction of the trending temperatures. As a result, any scientist would have to change their algorithms to CORRECT THIS MATHEMATICAL ERROR, which is what is seen in the purple trend. Any climate change skeptic that would look at the green trend, which is a fabricated guess, (as is the purple trend line, after all) and conclude that the trend shows cooling, when the ACTUAL recorded temperatures are going up, is showing what a laughable lack they have of mathematical knowledge in general. As for grafting instrumental data onto the end of the proxy series…. how else are you going to do it? Need I remind people that tree rings and ice cores, ahem, are not as accurate as instrumental readings. And you just don’t have instrumental readings from 50,000 BC through 1700. Sorry.
My goodness, JT,
Clever as you might be with mathematics, your English comprehension skills are in doubt. Just take a little look at Brian Macker’s post above yours.
Adding oranges to apples doesn’t inavalidate those fruits, it just makes lousy marmalade.
“How else are you going to do it?” you ask. Do what? Convince people that you know exactly what’s going on?
Another disgusting event.
But it´s great to read more than a thousand comments in just one entry.
Let´s spread the data and let each one make their own mind.
Peace to all
Here’s a great video.
The decline….. What decline are we talking about? Temperature, or the divergence in tree ring data? The video explains this comment IN CONTEXT. They’re talking about tree rings. See links below.
I’ve read a bunch of stuff here today and it really doesn’t change my view on the subject. In essence one side was able to come out on top and the others are just mad you weren’t on top.
Global warming, I don’t know, sure seems like it to me. The weather here is warmer than it was 30 yrs ago when I was a child. Are humans “The cause?” No, just like cigarettes aren’t “The cause cancer.” But don’t tell me cigarettes aren’t a contributing factor or bad for you!! As a former smoker I can say that I don’t get as many colds as before and I sure breath a lot easier now. Likewise don’t tell me that humans aren’t a contributing factor to the enviornment, we are! Look at LA, they have bad air day advisories. That IS from too much crap put into the air by humans!!
Also what is lost on all of you is the absolute fact that fossil fuels are finite. Which means sooner or later we have to find other sources of energy, which will take time to perfect, so why not sooner. Another fact lost on most of you is the 1000’s of products that use oil products; that’s right 1000’s. All those products go away when the finite amt of oil is used up. So let’s stop being wasteful, recycle and push for green energy.
Doug W. – every generation claims that they had it tougher than the subsequent generation (when I was a lad we walked barefoot to school through snow uphill both ways) and you prove your generation is no different. I have a picture of my father plowing fields in January in the early 1950’s in Maine, a feat that has not been possible since. I remember cold winters and warm winters in my youth (in the 50’s and 60’s) and I still see it today. Normally I would trust data over my experience, but now that I see how much the data is manipulated, I have to trust my experience and that tells me that it is as cold now as it was in my youth. Hopefully we will be able to get raw, unfiltered data in the future from people of integrity, but I fear that we will have to wait for scientists without political and funding agendas to come onto the scene.
You say Los Angeles is proof of man’s impact on the environment. There is some truth to this, however, the indians call the LA basin “the land of 10,000 smokes.” LA is geographically unsuitable for handling any form of pollution, whether from forest fires, power plants or automobiles with an on shore breeze trapping gases against a mountain barrier on the east side of the basin. Citing LA as an example of man’s damage to the environment is the sort of cherry picking that characterizes the folks on your side of the debate.
As far as when we should transition from fossil fuels to other forms of energy, all we need to do is watch the price signals in a free market (the wisdom of crowds). Of course your counter-argument is that you don’t trust markets but I certainly trust them more than your opinion.
All involved need to be thrown in prison and tried for treason, fraud, theft, and being very stupid. None deserve to be called “scientist” – they are all hacks – and not very good ones either. | http://wattsupwiththat.com/2009/11/20/mikes-nature-trick/ | CC-MAIN-2016-07 | refinedweb | 21,046 | 71.55 |
Since DOM
is becoming the interface of choice in the Perl-XML world, it
deserves more elaboration. The following sections describe class
interfaces individually, listing their properties, methods, and
intended purposes.
WARNING:.
The Document class controls
the overall document, creating new objects when requested and
maintaining high-level information such as references to the document
type declaration and the root element.
Document Type Declaration (DTD).
The root element of the document.
Generates a new node object.
Generates a new element or attribute node object with a specified
namespace qualifier.
Creates a container object for a document's subtree.
Returns a NodeList of all elements having a given tag name at any
level of the document.
Returns a NodeList of all elements having a given namespace qualifier
and local name. The asterisk character (*) matches any element or any
namespace, allowing you to find all elements in a given namespace.
Returns a reference to the node that has a specified ID attribute.
Creates a new node that is the copy of a node from another document.
Acts like a "copy to the clipboard"
operation for importing markup..)
No specific methods or properties are defined; use the generic node
methods to access data.).
The name of the root element.
A NamedNodeMap of entity declarations.
A NamedNodeMap of notation declarations.
The internal subset of the DTD represented as a string.
The external subset of the DTD's public identifier.
The external subset of the DTD's system identifier..
A property that is defined for elements, attributes, and entities. In
the context of elements this property would be the
tag's name.
A property defined for attributes, text nodes, CDATA nodes, PIs, and
One of the following types of nodes: Element,
Attr, Text,
CDATASection, EntityReference,
Entity, ProcessingInstruction,
Comment, Document,
DocumentType, DocumentFragment,
or Notation.
A reference to the parent of this node.
An ordered list of references to children of this node (if any).
References to the first and last of the node's
children (if any).
The node immediately preceding or following this one, respectively.
An unordered list (NamedNodeMap) of nodes that are
attributes of this one (if any).
A reference to the object containing the whole document -- useful
when you need to generate a new node.
A namespace URI if this node has a namespace prefix; otherwise it is
null.
The namespace prefix associated with this node.
Inserts a node before a reference child element.
Swaps a child node with a new one you supply, giving you the old one
in return.
Adds a new node to the end of this node's list of
children.
True if there are children of this node; otherwise, it is false..
Returns true if this node has defined attributes.
Returns true if this implementation supports a specific feature.
This class is a container for an ordered list
of nodes. It is "live," meaning
that any changes to the nodes it references will appear in the
document immediately.
Returns an integer indicating the number of nodes in the list.
Given an integer value n, returns a
reference to the nth node in the list,
starting at zero.
This unordered set of nodes is designed to
allow access to nodes by name. An alternate access by index is also
provided for enumerations, but no order is implied.
Retrieves or adds a node using the node's
nodeName property as the key.
Takes a node with the specified name out of the set and returns it.
Given an integer value n, returns a
reference to the nth node in the set. Note
that this method does not imply any order and is provided only for
unique enumeration.
Retrieves a node based on a namespace-qualified name (a namespace
prefix and local name).
Takes an item out of the list and returns it, based on its
namespace-qualified name.
Adds a node to the list using its namespace-qualified name.
This class extends Node to
facilitate access to certain types of nodes that contain character
data, such as Text,
CDATASection, Comment, and
ProcessingInstruction. Specific classes like
Text inherit from this class.
The character data itself.
The number of characters in the data.
Appends a string of character data to the end of the
data property.
Extracts and returns a segment of the data
property from offset to
offset + count.
Inserts a string inside the data property at the
location given by offset.
Sets the data property to an empty string.
Changes the contents of data property with a new
string that you provide.
This is the most common type of node you will encounter. An
element can contain other nodes and has attribute nodes.
The name of the element.
Returns the value of an attribute, or a reference to the attribute
node, with a given name.
Adds a new attribute to the element's list or
replaces an existing attribute of the same name.
Returns the value of an attribute and removes it from the
element's list.
Returns a NodeList of descendant elements who
match a name.
Collapses adjacent text nodes. You should use this method whenever
you add new text nodes to ensure that the structure of the document
remains the same, without erroneous extra children.
Retrieves an attribute value based on its qualified name (the
namespace prefix plus the local name).
Gets an attribute's node by using its qualified
name.
Returns a NodeList of elements among this
element's descendants that match a qualified name.
Returns true if this element has an attribute with a given name.
Returns true if this element has an attribute with a given qualified
name.
Removes and returns an attribute node from this
element's list, based on its namespace-qualified
name.
Adds a new attribute to the element's list, given a
namespace-qualified name and a value.
Adds a new attribute node to the element's list with
a namespace-qualified name.
The attribute's name.
If the program or the document explicitly set the attribute, this
property is true. If it was set in the DTD as a default and not reset
anywhere else, then it will be false.
The attribute's value, represented as a text node.
The element to which this attribute belongs..
CDATA Section is like a text
node, but protects its contents from being parsed. It may contain
markup characters (<, &) that would be illegal in text nodes.
Use generic Node methods to access data.
The target value for the node.
The data value for the node.
This is a class representing comment nodes. Use the
generic Node methods to access the data..
This class provides access to an entity in the document, based on
information in an entity declaration in the DTD.
A public identifier for the resource (if the entity is external to
the document).
A system identifier for the resource (if the entity is external to
the document).
If the entity is unparsed, its notation reference is listed here.
Notation represents a notation declaration
appearing in the DTD.
A public identifier for the notation.
A system identifier for the notation. | https://docstore.mik.ua/orelly/perl3/pxml/ch07_02.htm | CC-MAIN-2020-05 | refinedweb | 1,177 | 59.3 |
- Update Recovery to TWRP 3.2.0
- Boot into Recovery
- Factory Reset (only needed if you are not already on Android 7.x AOSP)
- Install 7.1.2 Grouper OTA-Package (Build 20180209) (md5: af699b7be25d7c6bccfd09b6f0ca1fb3)
or
- Install 7.1.2 Tilapia OTA-Package (Build 20180209) (md5: 19986b1223c6c497769109b0413005b6)
- Reboot into recovery
- Install Gapps for 7.1.x (I used BeansGapps-Mini-7.1.x-20170725.zip), be sure they fit into the remaining space in /system if you use a different package(!)
- To get root access, I suggest to use Magisk V15.3
- Wipe dalvik/cache
- Reboot
Important: For more detailed installation instructions please refer to the next post
Important: First boot after upgrading will take a bit longer (stays on ANDROID) because of art optimizations
Important: Magisk hide does not work as it does not support kernels without mount namespace support
flash and use on your own risk!
CREDITS,
AOSP/Google, LineageOS, Ziyann for setting up the Grouper-AOSP repository and his Unlegacy Android project, daniel_hk for some hints how to get the 3.1 kernel running with N, timduru for his work on the Asus Transformer, Timur Mehrvarz for his work on the tegra kernel, Francisco Franco for his kernel work in general
Bugs:
- due to the lack of a Tilapia device, I was not able to test the build, please report issues if you find some
- mirroring to a Chromecast device is working, but disconnecting by using the notification is blocking sometimes the device. Workaround: use the Home.app from google to disconnect the casting.
- probably some more
Latest Changes
- 201780209 (AOSP 7.1.2)
- February security patches applied
- fix screeen unlocking issues (thanks to @Charles IV)
- huge amount of kernel changes
- backport of linux kernel namespace support (-> Magisk-hide is now able to work properly)
- backport of linux kernel seccomp support (-> mediaextractor and mediacodec processes are running in a sandbox)
- backport of sdcardfs support (currently still disabled, but can be enabled easily by changing the build.prop file)
- backport of vmpressure and adaptive low memory killer
- many other security patches and bug fixes
Changelog
- 201780112 (AOSP 7.1.2)
- January security patches applied
- LP overclocking disabled
- 20171208 (AOSP 7.1.2)
- December security patches applied
- 20171111 (AOSP 7.1.2)
- November security patches applied
- intelliactive performance tuning
- 20171018 (AOSP 7.1.2)
- further performance optimizations
- intelliplug disabled again
- KRACK-attack fixes
- 20171010 (AOSP 7.1.2)
- android-7.1.2_r33 (october security fixes manually backported due to missing 7.1.2 updates from Google)
- slow charging bug fixed (but OTG-charging removed :( )
- further minor performance optimizations
- intelliplug re-enabled (please report issues if you find some)
- 20170923 (AOSP 7.1.2)
- android-7.1.2_r33 (september security fixes manually backported due to missing 7.1.2 updates from Google)
- patches for correct calculation of free memory when zram is enabled
- performance optimizations to fix the lagging
- intelliplug disabled for now due to kernel oops
- sdcard_fs support added to kernel
- 20170811 (AOSP 7.1.2)
- update to android-7.1.2_r33 (security fixes applied)
- low memory killer adjusted to be more agressive
- many changes to the kernel, the most important ones:
- LP core overdrive up to 620 Mhz
- LP1 undervolting to 0.95V
- intelliplug from @faux123 (default setting: balanced profile)
- adjustments to the intelli_active frequency settings
- 20170707 (AOSP 7.1.2)
- update to android-7.1.2_r27
- minor improvements to intelliactive governor
- 20170608 (AOSP 7.1.2)
- update to android-7.1.2_r16
- changed default governor to intelliactive
- 20170505 (AOSP 7.1.2)
- update to android-7.1.2_r10
- backported patches included into the kernel to fix security issues (CVE-2017-7184, ...)
- libnvos reverted to the unpatched version, while libGLESv1_CM_tegra.so and libGLESv2_CM_tegra.so are replaced by a patched version to eliminate the need of a shim lib (thanks to @Ziyan , @sheffzor and @csk1jw)
- 20170410 (AOSP 7.1.2)
- NFC fixed (was an issue for Grouper only)
- SetupWizard crashes hopefully fixed, too
- 20170409 (AOSP 7.1.2)
- video crashes fixed
- 20170407 (AOSP 7.1.2)
- new release based on 7.1.2_r5, April 2017 security fixes applied
- huge changes / updates in the kernel to improve performance and battery life time
- PerformanceControl application added (no further need to use Kernel Adiutor)
- sudden appplication closes fixed
- 20170308 (AOSP 7.1.1)
- new release based on 7.1.1_r25, March 2017 security fixes applied
- several changes in the kernel
- zRAM 200MB default enabled
- lz4 compression algorithm for ramdisk
- optimizations for interactive governor
- several optimizations
- usb hostmode changing patch fixed
- 20170207 (AOSP 7.1.1)
- new release based on 7.1.1_r20, February 2017 security fixes applied
- DRM issues (hopefully) fixed
- several changes in the kernel
- compiler changed from gcc-4.9 back to gcc-4.8
- several security patches
- several optimizations
- usb hostmode changing patch added
- ota update script modified so that /system is formatted before applying a FULL-OTA image
- 20170105 (AOSP 7.1.1)
- new release based on 7.1.1_r9, January 2017 security fixes applied
- 20161213 (AOSP 7.1.1)
- new release still based on 7.1.1_r4
- tilapia ril issues fixed (thanks to @millosr)
- Add CUSTOM_BRIGHTNESS support (used the patch from DC-kernel, thanks to @daniel_hk)
- camera2 fixed (thanks to @aaopt)
- missing WallpaperPicker project added
- using widevine libs from Unlegacy (thanks to @Ziyan)
- 20161207 (AOSP 7.1.1)
- new release based on 7.1.1_r4
- Music app no longer crashing
- stabilzation, almost no FCs
- Kernel security patch (CVE-2016-8655) applied
- 20161127 (AOSP 7.1)
- new release, first release for Tilapia based on 7.1.0_r5 (thanks to @millosr for providing the tilapia device tree)
- camera working now
- Music app no longer crashing
- overall more stable and much less FCs
- 20161109 (AOSP 7.1)
- new release, based on 7.1.0_r5
- security patch for November applied
- several security fixes in kernel
- 20161028 (AOSP 7.1)
- new release, based on 7.1.0_r4
- security patch for October applied
- SELinux in enforcing mode now
- Dirty COW security patch applied
- integration of performace tweaks based on ParrodMod, thanks to @parrotgeek1
- 20161019
- new release, based on 7.0.0_r14
- security patch for October applied
- minor bugfixes (NFC, bluetooth, ...)
- 20160923
- initial release, based on 7.0.0_r6
Downloads
Version from 12/01/2018 (7.1.2)
Grouper OTA-Package 20180112
Tilapia OTA-Package 20180112
Version from 08/12/2017 (7.1.2)
Grouper OTA-Package 20171208
Tilapia OTA-Package 20171208
Version from 11/11/2017 (7.1.2)
Grouper OTA-Package 20171111
Tilapia OTA-Package 20171111
Version from 10/18/2017 (7.1.2)
Grouper OTA package 20171018
Tilapia OTA package 20171018
Version from 08/11/2017 (7.1.2)
Grouper OTA package 20170811
Tilapia OTA package 20170811
Version from 07/07/2017 (7.1.2)
Grouper OTA package 20170707
Tilapia OTA package 20170707
Version from 06/08/2017 (7.1.2)
Grouper OTA package 20170608
Tilapia OTA package 20170608
Version from 05/08/2017 (7.1.2)
Grouper OTA package 20170505
Tilapia OTA package 20170507
Version from 04/10/2017 (7.1.2)
Grouper OTA package 20170410
Tilapia OTA package 20170410
Version from 03/08/2017 (7.1.x)
Grouper OTA package 20170308
Tilapia OTA package 20170308
Version from 02/07/2017 (7.1.x)
Grouper OTA package 20170207
Tilapia OTA package 20170207
Version from 01/05/2017 (7.1.x)
Grouper OTA package 20170105
Tilapia OTA package 20170105
Version from 12/13/2016 (7.1.x)
Grouper OTA package 20161213
Tilapia OTA package 20161213
Version from 12/07/2016 (7.1.x)
Grouper OTA package 20161207
Tilapia OTA package 20161207
Version from 11/27/2016 (7.1.x)
Grouper OTA package 20161127
Tilapia OTA package 20161127
Version from 11/09/2016 (7.1.x)
OTA package 20161109
Version from 10/19/2016
OTA package 20161019
Version from 09/23/2016
OTA package 20160923
Sources
If you would like to build from the sources, you can do so by cloning and build from repositories:
- repo init -u -b ads-7.1.0
- repo sync (... and go out for lunch ...)
- . build/envsetup.sh
- lunch 7 / 8 / 9
- make / make otapackage
XDA:DevDB Information
Android 7.x AOSP, ROM for the Nexus 7
Contributors
AndDiSa
Source Code:
ROM OS Version: 7.x Nougat
ROM Kernel: Linux 3.1.x
Based On: AOSP
Version Information
Status: Beta
Created 2016-09-23
Last Updated 2018-02-09 | https://forum.xda-developers.com/nexus-7/development/rom-android-7-aosp-grouper-t3467514 | CC-MAIN-2018-09 | refinedweb | 1,382 | 57.16 |
Introduction :
Loading a gif from URL is easy in react native. React native provides one Image component that can load gifs from a URL. In this tutorial, I will show you how to load one gif from a remote URL. I will provide you the full working code at the end of this post.
For this example, I am using one gif from giphy: You can replace it with any other URL if you want.
For loading a gif, the Image component is used. import it from react-native :
import {Image} from 'react-native';
Store the URL in a constant and assign that value as a source to the Image :
const imageUrl = ''; <Image source={{uri: imageUrl}} style={styles.image} />
It looks as like below on iPhone :
Loading gif in android is different. Android can’t load gif automatically. For that you need to add one library. Open android/app/build.gradle file and add the below line inside dependencies block :
implementation 'com.facebook.fresco:animated-gif:2.0.0'
For android version before Ice cream sandwich(API 14), use it :
implementation 'com.facebook.fresco:animated-base-support:1.3.0'
Most app doesn’t support before API 14, so only the first line will work.
You can check the official guide to find out the latest version of this lib.
Rerun the app and here is the result :
| https://www.codevscolor.com/react-native-load-gif-url | CC-MAIN-2021-04 | refinedweb | 228 | 66.23 |
List All Available Redis Keys
Last modified: January 24, 2020
1. Overview
Collections are an essential building block in almost every modern application. So, it's no surprise that Redis offers a variety of popular data structures such as lists, sets, hashes, and sorted sets for us to use.
In this tutorial, we'll learn how we can effectively read all available Redis keys that match a particular pattern.
2. Explore Collections
Let's imagine that our application uses Redis to store information about balls used in different sports. We should be able to see information about each ball available from the Redis collection. For simplicity, we'll limit our data set to only three balls:
- Cricket ball with a weight of 160 g
- Football with a weight of 450 g
- Volleyball with a weight of 270 g
As usual, let's first clear our basics by working on a naive approach to exploring Redis collections.
3. Naive Approach Using redis-cli
Before we start writing Java code to explore the collections, we should have a fair idea of how we'll do it using the redis-cli interface. Let's assume that our Redis instance is available at 127.0.0.1 on port 6379, for us to explore each collection type with the command-line interface.
3.1. Linked List
First, let's store our data set in a Redis linked list named balls in the format of sports-name_ball-weight with the help of the rpush command:
% redis-cli -h 127.0.0.1 -p 6379 127.0.0.1:6379> RPUSH balls "cricket_160" (integer) 1 127.0.0.1:6379> RPUSH balls "football_450" (integer) 2 127.0.0.1:6379> RPUSH balls "volleyball_270" (integer) 3
We can notice that a successful insertion into the list outputs the new length of the list. However, in most cases, we'll be blind to the data insertion activity. As a result, we can find out the length of the linked list using the llen command:
127.0.0.1:6379> llen balls (integer) 3
When we already know the length of the list, it's convenient to use the lrange command to retrieve the entire data set easily:
127.0.0.1:6379> lrange balls 0 2 1) "cricket_160" 2) "football_450" 3) "volleyball_270"
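Note that lrange's start and stop indices are zero-based and inclusive at both ends, which is why `lrange balls 0 2` returns all three elements. The following standalone Java sketch (plain JDK, not Jedis code) mirrors that retrieval semantics:

```java
import java.util.Arrays;
import java.util.List;

public class RedisListDemo {

    // Mirrors LRANGE's zero-based, inclusive-at-both-ends range semantics
    public static List<String> lrange(List<String> list, int start, int stop) {
        return list.subList(start, stop + 1); // subList's upper bound is exclusive
    }

    public static void main(String[] args) {
        List<String> balls = Arrays.asList("cricket_160", "football_450", "volleyball_270");
        System.out.println(lrange(balls, 0, 2)); // [cricket_160, football_450, volleyball_270]
    }
}
```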
3.2. Set
Next, let's see how we can explore the data set when we decide to store it in a Redis set. To do so, we first need to populate our data set in a Redis set named balls using the sadd command:
127.0.0.1:6379> sadd balls "cricket_160" "football_450" "volleyball_270" "cricket_160" (integer) 3
Oops! We had a duplicate value in our command. But, since we were adding values to a set, we don't need to worry about duplicates. Of course, the integer reply tells us how many distinct items were actually added.
Now, we can leverage the smembers command to see all the set members:
127.0.0.1:6379> smembers balls 1) "volleyball_270" 2) "cricket_160" 3) "football_450"
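The duplicate "cricket_160" in our sadd call was silently collapsed because a set holds each member only once. A plain Java Set (an illustration only, not Jedis code) exhibits the same behavior:

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class RedisSetDemo {

    // Mirrors SADD: adding a member that already exists is a no-op
    public static Set<String> sadd(Set<String> set, String... members) {
        set.addAll(Arrays.asList(members));
        return set;
    }

    public static void main(String[] args) {
        Set<String> balls = new LinkedHashSet<>();
        sadd(balls, "cricket_160", "football_450", "volleyball_270", "cricket_160");
        System.out.println(balls.size()); // 3 — the duplicate collapsed
    }
}
```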
3.3. Hash
Now, let's use Redis's hash data structure to store our dataset in a hash key named balls such that hash's field is the sports name and the field value is the weight of the ball. We can do this with the help of hmset command:
127.0.0.1:6379> hmset balls cricket 160 football 450 volleyball 270 OK
To see the information stored in our hash, we can use the hgetall command:
127.0.0.1:6379> hgetall balls 1) "cricket" 2) "160" 3) "football" 4) "450" 5) "volleyball" 6) "270"
3.4. Sorted Set
In addition to a unique member-value, sorted-sets allows us to keep a score next to them. Well, in our use case, we can keep the name of the sport as the member value and the weight of the ball as the score. Let's use the zadd command to store our dataset:
127.0.0.1:6379> zadd balls 160 cricket 450 football 270 volleyball (integer) 3
Now, we can first use the zcard command to find the length of the sorted set, followed by the zrange command to explore the complete set:
127.0.0.1:6379> zcard balls (integer) 3 127.0.0.1:6379> zrange balls 0 2 1) "cricket" 2) "volleyball" 3) "football"
3.5. Strings
We can also see the usual key-value strings as a superficial collection of items. Let's first populate our dataset using the mset command:
127.0.0.1:6379> mset balls:cricket 160 balls:football 450 balls:volleyball 270 OK
We must note that we added the prefix “balls:” so that we can identify these keys from the rest of the keys that may be lying in our Redis database. Moreover, this naming strategy allows us to use the keys command to explore our dataset with the help of prefix pattern matching:
127.0.0.1:6379> keys balls* 1) "balls:cricket" 2) "balls:volleyball" 3) "balls:football"
4. Naive Java Implementation
Now that we have developed a basic idea of the relevant Redis commands that we can use to explore collections of different types, it's time for us to get our hands dirty with code.
4.1. Maven Dependency
In this section, we'll be using the Jedis client library for Redis in our implementation:
<dependency> <groupId>redis.clients</groupId> <artifactId>jedis</artifactId> <version>3.2.0</version> </dependency>
4.2. Redis Client
The Jedis library comes with the Redis-CLI name-alike methods. However, it's recommended that we create a wrapper Redis client, which will internally invoke Jedis function calls.
Whenever we're working with Jedis library, we must keep in mind that a single Jedis instance is not thread-safe. Therefore, to get a Jedis resource in our application, we can make use of JedisPool, which is a threadsafe pool of network connections.
And, since we don't want multiple instances of Redis clients floating around at any given time during the life cycle of our application, we should create our RedisClient class on the principle of the singleton design pattern.
First, let's create a private constructor for our client that'll internally initialize the JedisPool when an instance of RedisClient class is created:
private static JedisPool jedisPool; private RedisClient(String ip, int port) { try { if (jedisPool == null) { jedisPool = new JedisPool(new URI("http://" + ip + ":" + port)); } } catch (URISyntaxException e) { log.error("Malformed server address", e); } }
Next, we need a point of access to our singleton client. So, let's create a static method getInstance() for this purpose:
private static volatile RedisClient instance = null; public static RedisClient getInstance(String ip, final int port) { if (instance == null) { synchronized (RedisClient.class) { if (instance == null) { instance = new RedisClient(ip, port); } } } return instance; }
Finally, let's see how we can create a wrapper method on top of Jedis's lrange method:
public List lrange(final String key, final long start, final long stop) { try (Jedis jedis = jedisPool.getResource()) { return jedis.lrange(key, start, stop); } catch (Exception ex) { log.error("Exception caught in lrange", ex); } return new LinkedList(); }
Of course, we can follow the same strategy to create the rest of the wrapper methods such as lpush, hmset, hgetall, sadd, smembers, keys, zadd, and zrange.
4.3. Analysis
All the Redis commands that we can use to explore a collection in a single go will naturally have an O(n) time complexity in the best case.
We are perhaps a bit liberal, calling this approach as naive. In a real-life production instance of Redis, it's quite common to have thousands or millions of keys in a single collection. Further, Redis's single-threaded nature brings more misery, and our approach could catastrophically block other higher-priority operations.
So, we should make it a point that we're limiting our naive approach to be used only for debugging purposes.
5. Iterator Basics
The major flaw in our naive implementation is that we're requesting Redis to give us all of the results for our single fetch-query in one go. To overcome this issue, we can break our original fetch query into multiple sequential fetch queries that operate on smaller chunks of the entire dataset.
Let's assume that we have a 1,000-page book that we're supposed to read. If we follow our naive approach, we'll have to read this large book in a single sitting without any breaks. That'll be fatal to our well-being as it'll drain our energy and prevent us from doing any other higher-priority activity.
Of course, the right way is to finish the book over multiple reading sessions. In each session, we resume from where we left off in the previous session — we can track our progress by using a page bookmark.
Although the total reading time in both cases will be of comparable value, nonetheless, the second approach is better as it gives us room to breathe.
Let's see how we can use an iterator-based approach for exploring Redis collections.
6. Redis Scan
Redis offers several scanning strategies to read keys from collections using a cursor-based approach, which is, in principle, similar to a page bookmark.
6.1. Scan Strategies
We can scan through the entire key-value collection store using the Scan command. However, if we want to limit our dataset by collection types, then we can use one of the variants:
- Sscan can be used for iterating through sets
- Hscan helps us iterate through pairs of field-value in a hash
- Zscan allows an iteration through members stored in a sorted set
We must note that we don't really need a server-side scan strategy specifically designed for the linked lists. That's because we can access members of the linked list through indexes using the lindex or lrange command. Plus, we can find out the number of elements and use lrange in a simple loop to iterate the entire list in small chunks.
Let's use the SCAN command to scan over keys of string type. To start the scan, we need to use the cursor value as “0”, matching pattern string as “ball*”:
127.0.0.1:6379> mset balls:cricket 160 balls:football 450 balls:volleyball 270 OK 127.0.0.1:6379> SCAN 0 MATCH ball* COUNT 1 1) "2" 2) 1) "balls:cricket" 127.0.0.1:6379> SCAN 2 MATCH ball* COUNT 1 1) "3" 2) 1) "balls:volleyball" 127.0.0.1:6379> SCAN 3 MATCH ball* COUNT 1 1) "0" 2) 1) "balls:football"
With each completed scan, we get the next value of cursor to be used in the subsequent iteration. Eventually, we know that we've scanned through the entire collection when the next cursor value is “0”.
7. Scanning With Java
By now, we have enough understanding of our approach that we can start implementing it in Java.
7.1. Scanning Strategies
If we peek into the core scanning functionality offered by the Jedis class, we'll find strategies to scan different collection types:
public ScanResult<String> scan(final String cursor, final ScanParams params); public ScanResult<String> sscan(final String key, final String cursor, final ScanParams params); public ScanResult<Map.Entry<String, String>> hscan(final String key, final String cursor, final ScanParams params); public ScanResult<Tuple> zscan(final String key, final String cursor, final ScanParams params);
Jedis requires two optional parameters, search-pattern and result-size, to effectively control the scanning – ScanParams makes this happen. For this purpose, it relies on the match() and count() methods, which are loosely based on the builder design pattern:
public ScanParams match(final String pattern); public ScanParams count(final Integer count);
Now that we've soaked in the basic knowledge about Jedis's scanning approach, let's model these strategies through a ScanStrategy interface:
public interface ScanStrategy<T> { ScanResult<T> scan(Jedis jedis, String cursor, ScanParams scanParams); }
First, let's work on the simplest scan strategy, which is independent of the collection-type and reads the keys, but not the value of the keys:
public class Scan implements ScanStrategy<String> { public ScanResult<String> scan(Jedis jedis, String cursor, ScanParams scanParams) { return jedis.scan(cursor, scanParams); } }
Next, let's pick up the hscan strategy, which is tailored to read all the field keys and field values of a particular hash key:
public class Hscan implements ScanStrategy<Map.Entry<String, String>> { private String key; @Override public ScanResult<Entry<String, String>> scan(Jedis jedis, String cursor, ScanParams scanParams) { return jedis.hscan(key, cursor, scanParams); } }
Finally, let's build the strategies for sets and sorted sets. The sscan strategy can read all the members of a set, whereas the zscan strategy can read the members along with their scores in the form of Tuples:
public class Sscan implements ScanStrategy<String> { private String key; public ScanResult<String> scan(Jedis jedis, String cursor, ScanParams scanParams) { return jedis.sscan(key, cursor, scanParams); } } public class Zscan implements ScanStrategy<Tuple> { private String key; @Override public ScanResult<Tuple> scan(Jedis jedis, String cursor, ScanParams scanParams) { return jedis.zscan(key, cursor, scanParams); } }
7.2. Redis Iterator
Next, let's sketch out the building blocks needed to build our RedisIterator class:
- String-based cursor
- Scanning strategy such as scan, sscan, hscan, zscan
- Placeholder for scanning parameters
- Access to JedisPool to get a Jedis resource
We can now go ahead and define these members in our RedisIterator class:
private final JedisPool jedisPool; private ScanParams scanParams; private String cursor; private ScanStrategy<T> strategy;
Our stage is all set to define the iterator-specific functionality for our iterator. For that, our RedisIterator class must implement the Iterator interface:
public class RedisIterator<T> implements Iterator<List<T>> { }
Naturally, we are required to override the hasNext() and next() methods inherited from the Iterator interface.
First, let's pick the low-hanging fruit – the hasNext() method – as the underlying logic is straight-forward. As soon as the cursor value becomes “0”, we know that we're done with the scan. So, let's see how we can implement this in just one-line:
@Override public boolean hasNext() { return !"0".equals(cursor); }
Next, let's work on the next() method that does the heavy lifting of scanning:
@Override public List next() { if (cursor == null) { cursor = "0"; } try (Jedis jedis = jedisPool.getResource()) { ScanResult scanResult = strategy.scan(jedis, cursor, scanParams); cursor = scanResult.getCursor(); return scanResult.getResult(); } catch (Exception ex) { log.error("Exception caught in next()", ex); } return new LinkedList(); }
We must note that ScanResult not only gives the scanned results but also the next cursor-value needed for the subsequent scan.
Finally, we can enable the functionality to create our RedisIterator in the RedisClient class:
public RedisIterator iterator(int initialScanCount, String pattern, ScanStrategy strategy) { return new RedisIterator(jedisPool, initialScanCount, pattern, strategy); }
7.3. Read With Redis Iterator
As we've designed our Redis iterator with the help of the Iterator interface, it's quite intuitive to read the collection values with the help of the next() method as long as hasNext() returns true.
For the sake of completeness and simplicity, we'll first store the dataset related to the sports-balls in a Redis hash. After that, we'll use our RedisClient to create an iterator using Hscan scanning strategy. Let's test our implementation by seeing this in action:
@Test public void testHscanStrategy() { HashMap<String, String> hash = new HashMap<String, String>(); hash.put("cricket", "160"); hash.put("football", "450"); hash.put("volleyball", "270"); redisClient.hmset("balls", hash); Hscan scanStrategy = new Hscan("balls"); int iterationCount = 2; RedisIterator iterator = redisClient.iterator(iterationCount, "*", scanStrategy); List<Map.Entry<String, String>> results = new LinkedList<Map.Entry<String, String>>(); while (iterator.hasNext()) { results.addAll(iterator.next()); } Assert.assertEquals(hash.size(), results.size()); }
We can follow the same thought process with little modification to test and implement the remaining strategies to scan and read the keys available in different types of collections.
8. Conclusion
We started this tutorial with an intention to learn about how we can read all the matching keys in Redis.
We found out that there is a simple way offered by Redis to read keys in one go. Although simple, we discussed how this puts a strain on the resources and is therefore not suitable for production systems. On digging deeper, we came to know that there's an iterator-based approach for scanning through matching Redis keys for our read-query.
As always, the complete source code for the Java implementation used in this article is available over on GitHub. | https://www.baeldung.com/redis-list-available-keys | CC-MAIN-2021-17 | refinedweb | 2,780 | 59.53 |
Hi everyone.
I'm trying to submit my project, but I receive an e-mail which says "[Contest] Submission Failure".
In the specifications it is written:
Pathfinder.cs(5,14): error CS0234: The type or namespace name `Drawing' does not exist in the namespace `System'. Are you missing an assembly reference?
Pathfinder.cs(280,32): error CS0246: The type or namespace name `Point' could not be found. Are you missing a using directive or an assembly reference?
Failed, output file MyTronBot.exe was not created
I add a "using System.Drawing" in all the clases..
However, when I compile my solution in the Visual Studio it works without any problem!
I'm developing in C#.
Anybody has idea what is happening?
Thanks! | http://forums.aichallenge.org/viewtopic.php?f=9&t=221 | CC-MAIN-2018-26 | refinedweb | 122 | 62.04 |
A deep dive in the Vue.js source code (#12): the generateComponentTrace function
If you are just joining, this is the 12th post in a series going over the entire Vue.js source code line by line. In this post, we look at the
generateComponentTrace function.
If you recall from our last post, the
warn function calls a function named
generateComponentTrace to set a variable named
trace if a Vue instance is passed to the warning:
warn = function (msg, vm) {
var trace = vm ? generateComponentTrace(vm) : '';
In the last post, we glossed over the implementation details of
generateComponentTrace to focus on the logic of the
warn function. In this post, we loop back around to the
generateComponentTrace function.
By default,
generateComponentTrace is a function that does nothing:
function noop (a, b, c) {}
[. . . .]
var generateComponentTrace = (noop); // work around flow check
But if the environment is not a production environment, generateComponentTrace is set to a function that does some work:
if (process.env.NODE_ENV !== 'production') {
[. . . .]
generateComponentTrace = function (vm) {
[. . . .]
}
}
The function takes a parameter:
generateComponentTrace = function (vm) {
[. . . .]
}
And checks whether it is a Vue instance:
if (vm._isVue && vm.$parent) {
[. . . .]
} else {
return ("\n\n(found in " + (formatComponentName(vm)) + ")")
}
You will recall that the
._isVue property is set to true within the
Vue.prototype._init method.
function initMixin (Vue) {
Vue.prototype._init = function (options) {
[. . . .]
// a flag to avoid this being observed
vm._isVue = true;
[. . . .]
}
}
generateComponentTrace next checks whether
vm.$parent coerces to true.
Since this is our first introduction to
vm.$parent, let’s take a closer look at it. Here, the Vue API is helpful. The API explains that
vm.$parent is a property on a Vue instance and refers to the “The parent instance, if the current instance has one.” When parent is set as a property, the API explains that it:
Establishes a parent-child relationship between the two. The parent will be accessible as
this.$parentfor the child, and the child will be pushed into the parent’s
$childrenarray.
We will take an even deeper look at the parent-child relationship and how the parent becomes accessible as
$parent when we discuss the
initLifecycle function. For our purposes in this post, we can move forward just knowing that the if statement is checking whether the Vue instance has a
$parent property.
If both conditions are met,
generateComponentTrace initializes a variable
tree and sets it to an empty array:
var tree = [];
And then sets a variable
currentRecursiveSequence to 0:
var currentRecursiveSequence = 0;
The beginning of the while loop may appear a little odd:
while (vm) {
[. . . .]
How does the loop quit? How does
vm ever become anything other than
vm? Look at the end of the loop for the answer. At the end of each loop, the Vue instance is pushed in to the
tree array and
vm is reset to the parent of the current instance:
while (vm) {
[ . . . .]
tree.push(vm);
vm = vm.$parent;
}
In other words, the loop works recursively up through the Vue instances through their
.$parent properties and exits when an instance does not have a
.$parent property.
So let’s turn back to the start of the loop. The loop first checks whether the length of the tree array is greater than 0:
if (tree.length > 0) {
var last = tree[tree.length - 1];
if (last.constructor === vm.constructor) {
currentRecursiveSequence++;
vm = vm.$parent;
continue
} else if (currentRecursiveSequence > 0) {
tree[tree.length - 1] = [last, currentRecursiveSequence];
currentRecursiveSequence = 0;
}
}
tree.push(vm);
vm = vm.$parent;
If the length of the
tree array is greater than 0, the variable
last is set to the last element in the array. Since arrays in Javascript are zero based — meaning that the array index starts at 0 — the length of an array is always one more than the last element in an array. Thus, you have to set last to the element of the tree array at an index of the length of the array minus one:
var last = tree[tree.length - 1];
Next, we check whether
last.constructor is strictly equal to
vm.constructor:
if (last.constructor === vm.constructor) {
currentRecursiveSequence++;
vm = vm.$parent;
continue
} else if (currentRecursiveSequence > 0) {
tree[tree.length - 1] = [last, currentRecursiveSequence];
currentRecursiveSequence = 0;
}
If so, the
currentRecursiveSequence variable is incremented:
currentRecursiveSequence++;
And the Vue instance is set to its parent property:
vm = vm.$parent;
And the continue statement “terminates execution of the statements in the current iteration of the current [] loop, and continues execution of the loop with the next iteration.”
continue
In other words, the following is not called after continue:
tree.push(vm);
vm = vm.$parent;
If
last.constructor is not strictly equal to
vm.constructor and
currentRecursiveSequence is greater than 0, the last element of the
tree is set to an array with two elements:
last and
currentRecursiveSequence. And then currentRecursiveSequence is set to 0:
} else if (currentRecursiveSequence > 0) {
tree[tree.length - 1] = [last, currentRecursiveSequence];
currentRecursiveSequence = 0;
}
Finally, the Vue instance is pushed onto the tree and — as discussed above — the Vue instance is reset to the Vue instance’s
.$parent property:
tree.push(vm);
vm = vm.$parent;
And that is the end of the while loop. It runs until you reach a
vm with no
.$parent property.
Next, we hit a lengthy return statement.
We’ll start taking a look at that return statement in the next post.
Next Post: | https://medium.com/@oneminutejs/a-deep-dive-in-the-vue-js-source-code-12-the-generatecomponenttrace-function-8d4a20dadd5c | CC-MAIN-2019-18 | refinedweb | 882 | 59.19 |
February 22, 2008
Confirmations
by Doug Noland
It was volatile from beginning to end. For the week, the Dow added 0.3% (down
6.7% y-t-d) and the S&P500 0.2% (down 7.8%). The Transports declined 0.5%
(up 2.4%), and the Utilities fell 1.0% (down 8.1%). The Morgan Stanley Consumer
index was unchanged (down 6.3%), while the Morgan Stanley Cyclical index added
0.6% (down 5.4%). The small cap Russell 2000 declined 0.9% (down 9.2%), while
the S&P400 Mid-Caps gained 0.7% (down 6.7%). The NASDAQ100 slipped 0.4%
(down 14.9%), and the Morgan Stanley High Tech index dipped 0.1% (down 13.5%).
The Semiconductors rallied 1.2% (down 13.8%). The Street.com Internet Index
increased 0.4% (down 9.9%), and the NASDAQ Telecommunications index gained
2.4% (down 9.3%). The Biotechs sank 2.9% (down 10.1%). The Broker/Dealers declined
0.5% (down 7.4%), while the Banks added 0.2% (down 1.1%). With Bullion surging
$43.05, the HUI Gold index advanced 6.4% (up 13.5%).
Three-month Treasury bill rates declined 4 bps this past week to 2.19%. Two-year
government yields jumped 15 bps to 2.06%. Five-year T-note yields rose 10 bps
to 2.86%, and ten-year yields increased 4 bps to 3.81%. Long-bond yields were
little changed at 4.58%. The 2yr/10yr spread ended the week at 175 bps. The
implied yield on 3-month December '08 Eurodollars jumped 23 bps to 2.615%.
Benchmark Fannie MBS yields surged 22 bps to 5.73%, again this week dramatically
under-performing Treasuries. This put the two-week rise in benchmark MBS yields
at a stunning 54 bps, with spreads versus Treasuries widening to the widest
level in eight years (192bps). The spread on Fannie's 5% 2017 note was little
changed at 71 bps and the spread of Freddie's 5% 2017 note little changed at
70. The 10-year dollar swap spread increased 6.25 to 72.75, the widest since
year-end. Corporate bond spreads were wider, especially in the (dislocated)
investment-grade sector. An index of junk bonds spreads declined 17 bps.
There was little debt issuance this week.
Convert issuance included NASDAQ Stock Market $425 million and Silver Standard
$120 million.
German 10-year bund yields rose 5 bps to 4.0%, while the DAX equities index
slipped 0.4% (down 15.6% y-t-d). Japanese "JGB" yields were unchanged at 1.45%.
The Nikkei 225 declined 0.9% (down 11.8% y-t-d and 25.4% y-o-y). Emerging debt
and equities markets were, again, quite volatile. Brazil's benchmark dollar
bond yields dropped 17 bps to 5.78%. Brazil's Bovespa equities index surged
5.4% (up 1.1% y-t-d). The Mexican Bolsa rallied 2.7% (unchanged y-t-d). Mexico's
10-year $ yields added 2 bps to 5.28%. Russia's RTS equities index rallied
4.5% (down 9.2% y-t-d). India's Sensex equities index sank 4.2%, extending
y-t-d declines to 14.5%. China's Shanghai Exchange sank 2.8% this week (down
16.9% y-t-d).
Freddie Mac posted 30-year fixed mortgage rates surged 32 bps this week to
6.04%. Mortgage rates were up 56 bps in four weeks, with borrowing costs now
down only 18 bps from the year ago level. Fifteen-year fixed rates jumped an
extraordinary 39 bps to 5.64% (down 33bps y-o-y). One-year adjustable rates
dipped 2 bps to 4.98% (down 51bps y-o-y).
Bank Credit declined $22.3bn during the most recent data week (2/13) to $9.315
TN. Bank Credit has posted a 30-week surge of $671bn (13.5% annualized) and
a 52-week rise of $943bn, or 11.3%. For the week, Securities Credit rose $9.6bn.
Loans & Leases dropped $31.9bn to $6.851 TN (30-wk gain of $527bn). C&I
loans added $0.3bn, with one-year growth of 21.1%. Real Estate loans fell $9.6bn
(up 7.0% y-o-y). Consumer loans increased $2.0bn. Securities loans dropped
$14.3bn, and Other loans declined $10.4bn. Examining the liability side, Deposits
declined $20.4bn.
M2 (narrow) "money" supply rose $16.3bn to a record $7.586TN (week of 2/11).
Narrow "money" expanded $123bn over the past six weeks, with a y-o-y rise of
$484bn, or 6.8%. For the week, Currency added $0.8bn, while Demand & Checkable
Deposits dropped $25.1bn. Savings Deposits jumped $33.4bn, while Small Denominated
Deposits dipped $2.1bn. Retail Money Fund assets increased $9.4bn.
Total Money Market Fund assets (from Invest Co Inst) rose another $20bn
last week (7-wk gain $295bn) to a record $3.408 TN. Money Fund assets
have posted a 30-week rise of $824bn (55% annualized) and a one-year increase
of $988bn (41%).
Asset-Backed Securities (ABS) issuance slowed to $2.2bn. Year-to-date total
US ABS issuance of $31bn (tallied by JPMorgan) is running about 30% of the
level from comparable 2007. Home Equity ABS issuance of $197 million compares
to $53bn in early 2007. Year-to-date CDO issuance of $1.8bn compares to the
year ago $40bn.
Total Commercial Paper sank $17.8bn to $1.817 TN. CP has declined
$406bn over the past 28 weeks. Asset-backed CP fell $11.7bn (28-wk
drop of $411bn) to $784bn. Over the past year, total CP has contracted
$198bn, or 9.8%, with ABCP down $272bn, or 25.7%.
Fed Foreign Holdings of Treasury, Agency Debt last week (ended 2/20) jumped
$17.3bn to $2.130 TN. "Custody holdings" were up $73.8bn y-t-d, or 23.3% annualized,
and $304bn year-over-year (16.6%). Federal Reserve Credit expanded $8.5bn last
week to $867bn. Fed Credit has contracted $6.7bn y-t-d, or 5.0% annualized,
while having expanded $15.2bn y-o-y (1.8%).
International reserve assets (excluding gold) - as accumulated by Bloomberg's
Alex Tanzi - were up $1.347 TN y-o-y, or 27%, to a record $6.344 TN.
Global Credit Market Dislocation Watch:
February 19 - Bloomberg (Jody Shenn): "The extra yield that investors demand
to own agency mortgage-backed securities over 10-year U.S. Treasuries rose
to an eight-year high as record spreads on other debt hurt demand."
February 21 - Financial Times (Sarah O'Connor, Robert Cookson and Michael
Mackenzie): "Credit markets were thrown into fresh turmoil yesterday as the
cost of protecting the debt of US and European companies against default surged
to all-time highs. The sharp jump, which rivalled the market swing at the height
of last summer's credit shake-out, came as investors unwound highly leveraged
positions in complex structured products. The move was in part prompted by
fears of further unwinding as investors rushed to exit before conditions worsened.
'There's a domino effect taking place,' said Mehernosh Engineer, credit strategist
at BNP Paribas. 'We are unwinding three years of excesses in the space of three
days.' The cost of insuring the debt of the 125 investment-grade companies
in the benchmark iTraxx Europe rose by more than 20% to as high as 136.9 basis
points... That compares with about 51bp at the start of the year."
February 21 - The Wall Street Journal (Carrick Mollenkamp): "The global financial
crisis, sparked by troubles in risky mortgage investments, is rapidly spreading
into a much larger area: the market for securities tied to the credit of the
world's corporations. U.S. and European indexes that track the likelihood of
corporate defaults are flashing red as traders and investors fret about the
outlook for the global economy and the possibility of blowups among some $6
trillion in complex securities tied to the value of corporate bonds... While
defaults among companies remain relatively low, the indexes' moves could prove
to be self-fulfilling prophecies, incurring heavy losses for investors and
making it even harder for people and companies to borrow money. Adding to the
anxiety: Analysts can only guess at the volume of investments tied to the indexes,
who is holding them and what it would take to trigger a full-scale selloff."
February 20 - Bloomberg (Abigail Moses and Shannon D. Harrington): "The cost
of protecting corporate bonds from default soared to a record as investors
purchased credit-default swaps to hedge against mounting losses in the $2 trillion
market for collateralized debt obligations. 'The market is full of rumors of
unwinding of CDOs, and the price action suggests that people believe the rumors,'
said Peter Duenas-Brckovitch, head of European credit trading at Lehman...
'It sort of has that Armageddon feel, and the market is feeding on itself.'
Constant proportion debt obligations, which package indexes of credit-default
swaps, may have to unwind about $44 billion of assets, UniCredit SpA analyst
Tim Brunne in Munich said... Some so-called synthetic CDOs that sold credit-default
swaps on an estimated $1 trillion in debt also are at risk as investors grow
concerned about plunging market values, Morgan Stanley analysts led by Sivan
Mahadevan wrote... 'The mark-to-markets on these have got to be pretty nasty,'
said Byron Douglass, an analyst at Credit Derivatives Research LLC... 'I would
imagine that as spreads go wider, more and more CDOs are probably being unwound.'"
February 19 - Financial Times (Chris Hughes): "Credit Suisse cast itself as
the champion of risk management and transparency among investment banks when
it unveiled record annual profits last week. But Tuesday's revelation of a
$2.85bn mark-down in its trading book has undermined the reputation for prudence
that it has so assiduously tried to cultivate. Losses in the... bank's structured
credit book emerged early last week, although Brady Dougan, the chief executive,
says he was unaware of the situation when he presented its results on February
12. The difficulties centre on the bank's trading inventory in residential
mortgage-backed securities (RMBSs) and collateralised debt obligations (CDOs)...
As the losses worsened, the bank was unaware of what was going on... The result
- a $2.85bn hit on the trading book which, after adjusting for lower revenue-related
bonus payments and tax, will dent first quarter net income by $1bn."."
February 21 - Bloomberg (Aaron Kirchfeld and Neil Unmack): "Dresdner Bank
AG, Germany's third-largest bank, agreed to rescue its $18.8 billion K2 structured
investment vehicle, joining Citigroup Inc. and HSBC Holdings Plc in putting
capital at risk to bail out investment funds... Dresdner...will provide a credit
line to enable K2 repay all of its senior debt... Dresdner will cut the size
of the fund, which has been reduced from $31.2 billion since July, according
to the statement."
February 20 - Bloomberg (Neil Unmack): "Standard Chartered Plc abandoned a
plan to refinance its $7.15 billion Whistlejacket Capital Ltd. structured investment
vehicle, the largest SIV run by a bank to collapse. The London-based bank blamed
the 'continuing deterioration in the market' for its decision... Whistlejacket
will become the sixth SIV to default if it doesn't make a payment by Feb. 21
when a three-day grace period ends..."
February 22 - Bloomberg (Christopher Condon): "Northern Trust Corp. agreed
to provide capital to some of its money-market funds if they suffer losses
on debt issued by Whistlejacket Capital LLC and White Pine Finance LLC. The...
bank may provide as much as $229 million to eight funds managing net assets
of $85.7 billion..."
February 20 - Bloomberg (Patricia Kuo and Edward Evans): "KKR Financial Holdings
LLC, Kohlberg Kravis Roberts & Co.'s $18 billion publicly traded credit
fund, delayed repaying some of its asset-backed commercial paper and started
restructuring talks with its creditors.... About half the debt will be due
by March 3 instead of Feb. 15, with the rest owed on March 25. The talks come
less than six months after the fund received a $230 million cash infusion from
investors after being hurt by losses on residential mortgages..."
February 21 - Bloomberg (Darrell Preston): ... 'The large numbers
of recent auction failures, which are reported to have occurred due to a reduction
in bidding by broker-dealers, appears to indicate those concerns were well
founded.'"
February 22 - Bloomberg (Jeremy R. Cooke): "...% or higher
on some bonds. Auctions covering as much as $26 billion of bonds a day failed
to attract enough buyers since Feb. 13..."
February 22 - Bloomberg (Jenny Strasburg): "AQR Capital Management LLC's largest
hedge fund fell almost 15% this year through Feb. 15 as market swings tripped
up computer models the managers use to make trades... The assets of AQR's Absolute
Return fund dropped to $2.9 billion last month from $4 billion in the fourth
quarter... Quantitative managers who rely on computers to make trades have
struggled as global equity markets declined..."
February 20 - Bloomberg (Pierre Paulden, Caroline Salas and Jody Shenn): "...CEO Trezevant Moore. Six
lenders are offering five times leverage... while a year ago, 20 banks extended
33 times, he said. Wall Street firms, reeling from $146 billion in losses on
their debt holdings, are fueling a credit crisis by clamping down on lending
to investors and hedge funds that use borrowed money to purchase securities."
February 21 - Bloomberg (Christopher Condon and Michael McDonald): "State
regulators are scrutinizing sales of auction-rate securities by closed-end
mutual funds as investors complain they can't get out of the investments, which
were billed as the equivalent of cash. Massachusetts Secretary of State William
Galvin asked nine fund companies for information about failed auctions that
left investors unable to sell their holdings, his office said in a statement
yesterday. Ohio Attorney General Marc Dann may file lawsuits after state funds
bought the securities, spokeswoman Jennifer Brindisi said yesterday in an e-mail.
'I wanted something as good as cash, and now I've got a lot of money in there
that I needed to get at quickly,' Aaron E. Some, an investor in Delray Beach,
Florida, said... The investor said he has $4.5 million tied up in auction-rate
securities issued by closed-end funds."
February 21 - Bloomberg (Hugh Son): "MasterCard Inc., the second-biggest payment-card
network, said it may be unable to sell about $252 million in auction-rate securities
because of a 'failure' of the bidding mechanism... 'There may be no effective
mechanism' for selling the securities, which are collateralized by U.S. student
loans, the firm said."
February 20 - Financial Times (Robert Cookson and Gillian Tett): "...ten times bigger than the underlying cash bonds on which the
CDS are based."
February 20 - The Wall Street Journal (Rob Curran): "Options can... The more a stock rises or falls, the more
a bank must buy or sell to hedge its risk. As a result, brokers are buying
when markets rise and selling when they fall, and they're doing so in greater
volumes. That may well be exacerbating stock moves in each direction, said
Lars Kestner, a managing director in equity derivatives at Deutsche Bank..."
February 21 - Bloomberg (Jody Shenn): "..."
February 21 - Bloomberg (Pierre Paulden): "The ratio of high-risk, high-yield
loans trading at distressed levels has surged to 8.13%, the highest in five
years, from 4.65% at the end of January, according to Wachovia Corp. Distressed
loans, defined as those that trade below 80 cents on the dollar, may have a
25% chance of defaulting within a year..."
February 22 - Bloomberg (David Mildenberg): "GMAC LLC, the lender partially
owned by General Motors Corp., agreed to loan as much as $750 million to its
residential mortgage unit as it seeks to sell a business that finances vacation
resorts. Residential Capital LLC borrowed $635 million under the agreement
yesterday..."
February 20 - Bloomberg (Bryan Keogh): "A record 41 companies with high-yield,
high-risk credit ratings are in danger of breaching terms of their loan agreements
within 12 months as the slowing economy cuts into corporate profits, Moody's...
estimates."
February 18 - Bloomberg (Gonzalo Vina and Jon Menon): "Northern Rock Plc,
which suffered the first run by U.K. bank depositors in more than a century,
may remain nationalized for years to come, according to the chairman appointed
by Prime Minister Gordon Brown's government. 'We are clearly talking about
a period of some years,' Ron Sandler...said..."
Currency Watch:
The dollar index fell 0.8% this week to 75.52. For the week on the upside,
the Brazilian real increased 1.9%, the New Zealand dollar 1.8%, the Norwegian
krone 1.6%, the Swiss franc 1.6%, the Swedish krona 1.3%, the Taiwanese dollar
1.3%, and the Euro 1.2%. On the downside, the South African rand declined 1.5%,
the Canadian dollar 0.5%, and the South Korean won 0.4%.
Commodities Watch:
February 22 - China Knowledge (Kartik Goyal): "China has surpassed the U.S.
and Turkey as the world's second largest market for gold jewelry, only next
to India, according to... the World Gold Council. Gold sales in the Greater
China area, including Hong Kong, Macau and Taiwan totaled 363.3 tons during
last year, surging 23% from a year earlier..."
February 19 - Bloomberg (Tony Dreibus and Jeff Wilson): "The biggest rally
in the history of wheat trading defied even some of the best conventional wisdom,
humbling forecasters from Goldman Sachs Group Inc. to the U.S. government.
Wheat has more than doubled since May, reaching a record $11.53 a bushel on
Feb. 11 and driving up costs for everything from Eggo waffles and Italian pasta
to Pakistani flatbreads and Japanese pastry."
February 19 - Financial Times: "The price of steel is set to rise after Asian
and European producers agreed to pay up to 71% more for iron ore in term-contract
rates beginning on April 1... The big rise suggests demand for commodities
from emerging economies such as China remains strong, offsetting the US slowdown
and fuelling fears that global inflation will continue to rise in the short
term."
February 19 - Financial Times (Chris Flood and Javier Blas): "Coffee, cocoa
and tea markets are nearing boiling point, with prices at multi-year peaks
as supportive demand and supply conditions and fears about food inflation have
fuelled high levels of speculative buying. 'Tight fundamentals tend to exacerbate
speculative investment,' says Nestor Osorio, executive director of the International
Coffee Organisation..."
February 19 - Financial Times (Javier Blas): "Tea prices are likely to jump
to an all-time high this year, underpinned by production disruptions in Kenya...
In the latest sign of rising global food inflation, wholesale tea prices surged
last year to an annual average of $1.95 a kilogram, a 6.5% increase from the
previous year and the highest annual level since 2002. Average tea prices "are
expected to reach even higher and possibly record levels" in 2008 following
a 10% reduction in shipments from Kenya..."
Gold surged 4.8% to $946, and Silver jumped 5.4% to $18.03. May Copper rose
7.5%. April Crude gained $3.64 to $99.09. March Gasoline jumped 3.5%, and March
Natural Gas gained 2.1%. March Wheat increased 2.1%. The CRB index surged 3.8%
to a new record (up 11.1% y-t-d). Coffee jumped to a 10-year high, increasing
y-t-d gains to 19%. The Goldman Sachs Commodities Index (GSCI) rose 3.5% to
a new record (up 8.2% y-t-d and 46.8% y-o-y).
China Watch:
February 19 - Bloomberg (Nipa Piboontanasawat): "China's inflation accelerated
to the quickest pace in more than 11 years after the worst snowstorms in half
a century disrupted food supplies. Consumer prices rose 7.1% in January from
a year earlier... Food prices soared 18% after blizzards paralyzed transport
systems and destroyed crops."
February 22 - Bloomberg (Nipa Piboontanasawat and Li Yanping): "China... said
inflation will remain at a high level in the first half of 2008 and the central
bank will use interest rates to control prices. China 'needs to bring out monetary
policy to control demand expansion and stabilize inflation expectations,' the
People's Bank of China said..."
February 21 - Bloomberg (Luo Jun): "Chinese banks face higher bad-loan ratios
for the first time since 2003 as corporate defaults may increase because of
tighter credit controls and weakening demand from a slowing U.S. economy, Standard & Poor's
said. 'Challenges are looming on the corporate lending front,' Liao Qiang,
a Beijing-based analyst at S&P, said..."
February 20 - Bloomberg (Tian Ying): "China's passenger car sales rose 19.8%
in January on demand ahead of the Chinese new year. Automakers in the country
sold a total of 661,900 cars during the period..."
February 19 - Bloomberg (Belinda Cao): "China will explore more channels to
invest its $1.5 trillion currency reserves, the world's biggest, for 'higher
returns,' the central bank said. The government will allow local companies
and individuals more leeway to convert their yuan holdings into foreign currencies
to invest overseas..."
Japan Watch:
February 22 - Bloomberg (Keiko Ujikane): "Japan's government lowered its assessment
of the economy for the first time in 15 months, saying growth will moderate
as exports and production cool."
India Watch: ...
Unbalanced Global Economy Watch:
February 20 - Bloomberg (Jennifer Ryan): "U.K. money supply growth unexpectedly
accelerated in January, the Bank of England said. M4...rose 12.9% from a year
earlier, compared with 12.3% in December..."
February 21 - Bloomberg (Simon Kennedy): "French inflation accelerated in
January to the fastest pace in at least 12 years, led by higher food and energy
costs. Consumer prices climbed by an annual 3.2%, up from 2.8% in December..."
February 21 - Financial Times (Ralph Atkins): "An inflation-beating 5.2% wage
increase secured by German steelworkers... stoked fears that stubbornly high
eurozone inflation pressures would prevent the European Central Bank from cutting
interest rates in the near future."
February 21 - Bloomberg (Simone Meier and Joshua Gallu): "Swiss producer and
import prices jumped to the highest level in almost 20 years in January, adding
to signs that inflation pressure is mounting. Prices for factory and farm goods
as well as imports increased 3.7% from a year earlier, the biggest gain since
Sept. 1989..."
February 19 - Bloomberg (Christian Wienberg): "Denmark's government should
limit spending as unemployment at a 33-year low threatens to spark a 'wage
spiral' and push up inflation, the Organization for Economic Cooperation and
Development said. 'Avoiding overheating is an urgent challenge,' the...OECD
said..."
February 22 - Bloomberg (Flavia Krause-Jackson and Giovanni Salzano): "Italy's
inflation rate for frequently bought goods such as food and gasoline surged
to the highest in more than a decade... Consumer prices for frequent
purchases jumped 4.8% in January from a year earlier..."
February 19 - Bloomberg (Jacob Greber): "Australian inflation may accelerate
to almost 4% as a labor shortage worsens, central bank official Malcolm Edey
said... 'The striking thing is the contrast between domestic and international
conditions,' Assistant Governor Edey told business leaders... 'The Australian
economy to date has stayed robust and the main domestic challenges are those
of strong demand, tight capacity and inflationary pressures.'"
Latin America Watch:
February 20 - Bloomberg (Matthew Craze): "Argentine truck drivers negotiated
a 19.5% wage increase with the government and their employers, setting a precedent
for wage talks with other salaried workers in the South American country."
Central Banker Watch:
February 19 - Bloomberg (Francois de Beaupuy): "The Bank of France said the
U.S. Federal Reserve may have cut interest rates too much and too quickly in
response to financial-market declines. An unsigned article in the...bank's
monthly bulletin...said new financial products have amplified asset price swings.
That may lead to 'stronger Monetary reactions than what would otherwise be
necessary, as shown by the recent decision of the Federal Reserve...' The unusual
criticism by one central bank of another may reflect the European Central Bank's
reluctance to follow its U.S. and U.K. counterparts in cutting rates..."
Bursting Bubble Economy Watch:
February 21 - The Wall Street Journal (Greg Ip): ...
A simultaneous rise in unemployment and inflation poses a dilemma for Fed Chairman
Ben Bernanke. When the Fed wants to fight unemployment, it lowers interest
rates. When it wants to damp inflation, it raises them. It's impossible to
do both at the same time."
GSE Watch:
February 22 - Bloomberg (Jody Shenn): "Freddie Mac, the second-largest provider
of money for U.S. home loans, implemented new fees for 'higher risk' debt it
buys or guarantees and said it will no longer accept most mortgages that exceed
97% of a home's value."
Mortgage Finance Bust Watch:
February 20 - Bloomberg (Alison Vekshin): "U.S. savings and loans posted a
record $5.24 billion loss in the fourth quarter of 2007 as housing-market distress
continued to take a toll... The loss stemmed from $4.07 billion in 'goodwill'
writedowns and $5.12 billion set aside for anticipated loan losses, the Treasury
Department's Office of Thrift Supervision said... 'Looking forward, I think
2008 is going to be a very difficult year for the industry,' OTS Director John
Reich said."
Muni Watch:
February 19 - Bloomberg (Michael McDonald): "Drivers on the Massachusetts
Turnpike may face higher tolls after the state was unable to sell auction-rate
securities backed by a unit of Ambac Financial Group Inc. The Massachusetts
Turnpike Authority was forced to delay refinancing $126.7 million it borrowed
for the 'Big Dig' because it bought bond insurance from troubled Ambac... The
state agency is spending an additional $300,000 in interest a month as a result...."
California Watch:
February 20 - Bloomberg (William Selway): "California Governor Arnold Schwarzenegger
ordered state agencies to stop hiring and scrap new equipment purchases as
part of a plan to save $100 million this year and help close a budget shortfall...
The order follows lawmakers' Feb. 15 passage of measures to cut about $1 billion
from this year's budget as revenue growth slows... Before the spending cuts,
California faced a $14.5 billion deficit through June 2009."
February 20 - Bloomberg (Michael B. Marois): "California's budget deficit
widened to $16 billion as the housing slump and higher energy costs cut tax
revenue, the state's fiscal analyst said. The deficit is $2 billion larger
than what Governor Arnold Schwarzenegger predicted in January... Fitch Ratings
has warned that California's credit ratings on $49 billion of debt are in danger."
February 21 - Bloomberg (Michael B. Marois): "California, the biggest borrower
in the U.S. municipal bond market, will replace $1.25 billion of auction-rate
bonds with traditional debt after a series of auction failures nationwide sent
rates soaring."
February 21 - Bloomberg (Michael B. Marois): "Vallejo, California, may become
the first city in the state to file for bankruptcy should labor negotiations
fail to cuts costs before officials run out of money by May 1... 'This is a
last resort,' City Councilwoman Stephanie Gomes, referring to bankruptcy, said...
'We're in a state of crisis here. This isn't a threat.'"
Fiscal Watch:
February 22 - New York Times (Edmund L. Andrews and Louis Uchitelle): "... But with the current efforts to
arrest the housing collapse so far bearing little fruit, Washington is being
forced to explore new ideas, among them the idea of a federal mortgage guarantee
for troubled borrowers."
Speculator Watch:
February 21 - Bloomberg (Tomoko Yamazaki): "Hedge funds around the world had
the worst month in at least eight years in January as equities worldwide tumbled
amid concerns that the U.S. economy was headed for a recession... The Eurekahedge
Hedge Fund Index, which tracks the performance of 2,467 funds that invest globally,
dropped 3.3%, based on preliminary figures..."
February 22 - Financial Times (Henny Sender): "New York hedge fund DB Zwirn & Co
is winding down its principal funds after investors - rattled by lapses in
internal controls... - said they would withdraw more than $2bn. Investors started
pulling their money after the group, which has almost $5bn under management,
disclosed in March last year that an independent internal review had uncovered
improper transfers among funds and improper handling of operational expenses...
On Thursday night, Zwirn sent a letter to investors outlining its plans to
liquidate assets, about 60% of which are not easily tradable and mostly involve
illiquid loans made both in the US and abroad."
February 21 - Dow Jones (Kaja Whitehouse): "Two China-focused hedge funds
that returned 100% or more last year posted double-digit percentage losses
in January, tripping over continued volatility in China's stock market. The
788 China Fund...lost 39.5% in January, following gains of 115% last year...
The Golden China Fund...lost 21.5% in January, following gains of 100.3% last
year..."
Crude Liquidity Watch:
February 20 - Bloomberg (Matthew Brown): "Inflation in the Middle East may
be stoked by recent snowstorms in China that damaged wheat crops, said Royal
Bank of Scotland Group Plc. The Middle East is the world's largest importer
of wheat, so any increase in the price of the commodity will fuel food prices...
'Wheat prices are already a serious problem for the Middle East, while recent
snowstorms in China may aggravate the problem,' Simpfendorfer said... 'This
risk underscores our view that the Middle East economies will continue to face
serious inflation risks, so the case for adopting basket pegs or permitting
faster appreciation is strong.'"
February 19 - Bloomberg (Matthew Brown): "Mortgage lending growth in the United
Arab Emirates slowed to an annual 83% at the end of the third quarter from
97% in the second quarter. Loan and overdraft growth remained at 25% in the
third quarter... M3 money supply growth slowed to 34%..."
Confirmations:
There have been several key CBB themes for early 2008. First, expect Credit
problem-associated economic weakness to gain momentum. Second, we're witnessing
the evolving "Breakdown of Wall Street Alchemy." Third, watch for especially
atypical Federal Reserve-induced "reflation" dynamics. Fourth, the year will
likely mark the bursting of the "leveraged speculating community" Bubble. And,
fifth, the unfolding Credit Crisis will become especially problematic as it
converges toward our system's functional "money" supply - the anchor of our
monetary system. This week saw important Confirmations with respect to all
of the above.
Economic weakness related to faltering Availability of Credit and Marketplace
Liquidity has been especially prominent in the data of late. This week was
no exception. The February reading for the Philadelphia Fed index sank to the
lowest level since February 2001. January Housing Starts were reported near
the weakest levels since the early nineties' recession, while early February
auto sales are said to be running 16% below last year's level. Out in California,
dwindling state revenues led legislative analysts to raise the estimate of
the state's budget shortfall to an alarming $16bn, an increase of $1.5bn from
last month's guesstimate.
Despite the faltering U.S. economy, pricing pressures are accelerating - a
dynamic I've heard referred to as the "new conundrum." January Consumer Prices
were up 4.3% y-o-y. Major commodity price indexes surged to yet new record
highs and, if anything, inflationary pressures are broadening. I've suggested
that this "reflation" will have consequences divergent from those of the past.
With the historic Bubble in "Wall Street Finance" now bursting, the powerful
monetary mechanism linking Fed rate cuts directly to asset price inflation
(in particular, real estate, risky debt, and stocks) has been severely impaired
if not completely severed. The link between U.S. interest rates, dollar devaluation,
faltering confidence in currencies in general, and inflating commodities prices
has never been stronger.
During the 2001-2003 "reflation", hedge fund managers were quoted as saying "the
Fed wanted me to buy stocks and junk bonds." Today, the Fed would surely hope
to send a similar message, while the sophisticated interpret things altogether
differently. Today, investors and speculators alike are much keener to buy
precious metals, energy, and commodities. And while the U.S. economy is succumbing
to powerful recessionary forces, it is no longer the sole global engine of
(Credit and economic) growth.
It's worth repeating that global Credit expansion remains brisk, while Bubble
dynamics and economic growth remain in place throughout Asia, the Middle East
and the "emerging economies," especially the powerful boom in the BRIC countries
(Brazil, Russia, India and China). In concert with the bursting of the Wall
Street Bubble, global inflationary dynamics now strongly favor "things" as
opposed to securities. In particular, necessities and Stores of Value available
in relatively limited supply are seeing extraordinary inflationary effects.
Here we see one of the key dynamics (Monetary Processes) differentiating the
current "reflation" from those of the past: Previous Wall Street finance-dominated
inflationary booms enveloped the securities markets, where the supply of stocks
and bonds could be readily expanded to meet booming demand. Today, the world
is faced with a very challenging prospect of increasing the supply of crude,
natural gas, ethanol, precious metals, industrial metals, wheat, soybeans,
grains, coffee, cocoa, tea and literally scores of things today in great demand
by end users, investors, and a bloated leveraged speculating community. Keep
in mind that foreign official reserves have inflated $1.35 TN over the past
year - and the U.S. is still on track for yet another year of massive Current
Account Deficits. Recognize also that the hedge fund and sovereign wealth fund
communities have ballooned to the multi-trillions. U.S. deficits and resulting
dollar devaluation continue to spur unwieldy Bubbles in China, India, the
Middle East and elsewhere.
The bottom line is the world remains awash in dollar liquidity that many are
content to exchange for tangible things deemed of greater value than suspect
U.S. financial assets. There is no inflation "conundrum," only increased supply
constraints and bottlenecks, global hoarding, and an unambiguous speculative
fever in markets for many of our economy's basic necessities.
This week provided Confirmation of a worsening backdrop for the leveraged
speculating community.
February 22 - Bloomberg (Hamish Risk): "Mathematical models that traders use
to calculate prices in the $2 trillion market for collateralized debt obligations
don't work anymore, according to UBS AG. The so-called correlation model, which
shows the odds of one default by an investment-grade company spreading to others
in a group, now exceeds 100%...said Geraud Charpin, a structured credit strategist
at UBS... 'The banks realize the model doesn't work and it needs to be changed,'
Charpin said... Banks are changing the model by reducing the amount of money
they expect to recover when a company defaults to 30% from 40%. That means
they have to protect against bigger potential losses by purchasing more credit-default
swaps, driving prices of the contracts to the highest on record."
And the breakdown in models is anything but limited to CDOs and the banking
community. Bloomberg today reported that AQR Capital Management, manager of
$35bn of assets, has suffered significant losses to begin the year. Its largest
hedge fund is said to be down 15% already, with slightly larger losses for
at least one of its smaller funds. This wasn't supposed to happen, and I'll
take this development as an important Confirmation that troubles that hit the "quant" fund
community this past summer have worsened significantly so far this year.
August was a terrible month for model-based quantitative strategies, although
most funds quickly recovered much of their losses as the markets stabilized
in the fall. This time, however, I do not expect the environment to accommodate.
Last summer's hope that the situation was a short-term aberration has been
replaced with this year's reality of Bursting Bubbles, Credit quagmires, model
breakdowns, hopelessly crowded trades, acute marketplace illiquidity and, it
would appear, highly problematic fund redemptions. Importantly, the game of
leveraged speculation - be it "quant" fund strategies, "market neutral," or "global
macro" - works wonderfully only as long as the industry enjoys (as it had for
years) robust inflows.
A steady flow of incoming funds for years ensured ample liquidity to build
positions, press bets, increase leverage, and bolster the perception of endless
marketplace liquidity, while working to boost industry returns (and overheated
expectations). A reversal of flows - especially when abrupt and significant
in scope - would pose quite a dilemma for individual funds, the industry overall,
and the impaired U.S. Credit system. It will be interesting to follow how the "quant" types
respond to an environment of model breakdown, losses, redemptions, and forced
position unwinds. I have a hunch they are not well "programmed" for such a
radical change in the environment. We can only speculate as to whether the
industry is on the precipice of major redemptions.
DB Zwirn obviously has its own issues. But it would provide an interesting
case study in fund dynamics. At one point, it was a booming fund group with
a stellar reputation, stellar returns, 15 offices worldwide and more than 1,000
employees. It is now suffering catastrophic unwinding, faced with the specter
of huge redemptions and illiquid positions (including Credit derivatives).
Apparently some positions will take up to four years to unwind. Investors that
believed they had the option to redeem their interests now confront the likelihood
of significant losses and long delays in the return of their capital. I suspect
this will be an increasingly common industry predicament.
Throughout the markets, this week provided further Confirmation of serious
liquidity constraints. The unfolding Breakdown in Wall Street Alchemy was underscored
by further issues with SIVs and the auction-rate securities fiasco. Many individuals,
funds, and corporations that believed they had invested in safe and liquid
("money-like") "cash equivalents" now instead hold illiquid positions in long-term
debt instruments of varying quality.
February 22 - Reuters: "Ethanol maker Aventine Renewable Energy Holdings Inc
warned on Friday it may be forced to delay construction of two new plants because
some of its assets have become unexpectedly tied up in investment securities...
Aventine... said it may not be able to sell its investments in auction-rate
securities (ARS), forcing it to draw on revolving bank debt or delay work on
the plants that are expected to begin operating early next year... Aventine's
securities carry AAA ratings and are backed by federal student loan guarantees.
Chief Financial Officer Ajay Sabherwal said the company would not immediately
try to sell them into the moribund auction market, but would likely need to
do so in the next few months. 'Should we not be able to liquidate a substantial
portion of the remaining portfolio of these ARS securities on a timely basis
and on acceptable terms, we will have to either attempt to raise additional
funds or slow down the construction of our new facilities, or both...'"
February 22 - Dow Jones (Michael Rapoport): "No one knows for sure who the
next Bristol-Myers Squibb Co. might be, but one thing's for sure: There's no
shortage of candidates. Dozens of companies have warned of potential problems
with their holdings of auction-rate securities, a survey by Dow Jones Newswires
has found... The market for these securities has seized up recently, prompting
some companies that hold them, like Bristol-Myers, to write down their value.
Beyond those dozens, other companies, including some big names, hold large
amounts of these securities... Without buyers, the securities aren't liquid.
And since many companies classify them as short-term 'available-for-sale' investments
that are supposed to be marked to market... As it happens, some auditors and
their clients had a dispute a few years ago about whether to treat auction-rate
securities as equivalent to cash... FASB ultimately decided not to address
the issue of how to account for the auction-rate securities. A lot of holders
of the securities are probably wishing they had that same option right now."
I have in past analysis suggested that the perceived soundness of U.S. corporate
balance sheets was extending a "hook" for those of the bullish persuasion.
It was, after all, egregious Mortgage Finance Bubble excesses - and resulting
Household and Financial Sector balance sheets loaded with debt - that were
responsible for booming corporate cash-flows and relatively stable balance
sheets. Well, these days it is becoming increasingly apparent that a significant
portion of corporate America's "cash" hoard is stuck in "auction-rate securities" and
various other Credit instruments - today offering little in the way of actual
liquidity. This is a major unfolding issue.
Scores of companies, previously believing they enjoyed easy access to new
finance, now face the inability to raise new funds at any cost. At the same
time, scores of companies that thought they were sitting on piles of safe and
liquid financial resources now recognize they may instead be facing big losses.
Moreover, the recognition of problems on both the Asset and Liability side
of corporate balance sheets comes concurrently with the realization that the
so-called sound and resilient U.S. economy is in serious trouble. Simultaneously,
confidence in "money" and money-raising is faltering, with negative ramifications
for already liquidity-challenged markets and the already weakened real economy.
There is a confluence of factors behind this week's major widening in corporate
spreads, especially in the "safest" sectors. Major indices of investment grade
Credit spreads blew out to record wide levels. The "CDX" index widened 12 bps
to a record 157 bps, increasing its three-week gain to 50 bps. Agency debt
spreads widened 2 bps to the widest level since last November. Yet GSE MBS
spreads were this week's eye-opener. Fannie Mae benchmark MBS spreads surged
17 bps to 192, the widest spreads in eight years. For perspective, this spread
has averaged 76 bps since the end of 2002.
I will take the dramatic widening in investment grade and agency spreads as
Confirmation that the unfolding Credit Crisis has made a major leap toward
the heart of the Credit system. I have no way of knowing to what degree widening
spreads are being dictated by "technical" hedging-related trading dynamics,
as opposed to fundamental issues with the faltering U.S. economy; rapidly deteriorating
corporate balance sheets; a highly susceptible leveraged speculating community;
the vulnerable GSEs; a distressingly illiquid Credit market; and heightened
systemic risk more generally. To be sure, a strong case can be made that the
current backdrop is quite detrimental to a highly leveraged and speculative
Credit system. The markets rallied late this afternoon - and perhaps they will
rally further next week - on talk of a bailout for troubled Ambac. Unfortunately,
there has been ample Confirmation that the Evolving Credit Crisis has quickly
spiraled way beyond the "monolines."
UUID-like identifier generator.
More...
#include <adobe/zuid.hpp>
List of all members.
(Note that these are not member functions.)
The ZUID class implements a non-standard UUID (Universally Unique ID). The ZUID is generated with an algorithm based on one available from the Open Software Foundation, but with the following differences:
These changes where made to improve performance and avoid privacy issues of having a hardware specific address imbedded in documents. These changes increase the probability of generating colliding IDs but the probability is low enough to suffice non-mission critical needs.
The UUID code in this file has been significantly altered (as described above) and should not be used where a true UUID is needed. The MD5 code has only been altered for coding standards. The algorithm should still function as originally intended.
Definition at line 89 of file zuid.hpp.
[explicit]
Sets this zuid_t to the given UUID; the UUID argument itself isn't changed.
Parses strings of the form "d46f246c-c61b-3f98-83f8-21368e363c36" and constructs the zuid_t from them.
Create a dependent zuid_t. Given an identical string and zuid_t it will always generate the same new zuid_t. This is useful if you have an object that has a unique name and you want to be able to get an ID for it given the ID of the parent object. The zuid_t is generated by running name_space and name (as UNICODE or ASCII) through MD5.
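This dependent-ID scheme is essentially the same idea as RFC 4122 name-based (version 3) UUIDs, which also run a namespace ID and a name through MD5. As a rough, library-agnostic illustration — using Python's standard uuid module as an analogy, not Adobe's zuid_t API, and with a made-up namespace ID and object names:

```python
import uuid

# Hypothetical namespace ID standing in for the parent zuid_t.
ns = uuid.UUID("d46f246c-c61b-3f98-83f8-21368e363c36")

# MD5(namespace + name): identical inputs always yield the same ID.
a = uuid.uuid3(ns, "my-object")
b = uuid.uuid3(ns, "my-object")
print(a == b)                        # True: deterministic for the same name
print(a == uuid.uuid3(ns, "other"))  # False: a different name gives a different ID
```

This is what makes the scheme useful for deriving a stable ID for a uniquely named child object from its parent's ID.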
00000000-0000-0000-0000-000000000000
[related]
UUID-compliant storage for the ZUID
[static]
Always set to the null zuid 00000000-0000-0000-0000-000000000000
Definition at line 114 of file zuid.hpp.
problem with dot-matrix printing
Hugh Lloyd
posted Feb 20, 2013 07:18:17
Hi, wasn't too sure if i was meant to post this here or beginners, but here's the problem anyway:
I've written a program that prints out names onto 0.9cm x 5cm labels on a dot-matrix printer. It works, on the whole, but after I started to really test it I noticed that some characters didn't come out right. The main one is ü (u with two dots above it): it comes out as ⁿ (small n). I've tried making a character variable as 0252 and 0x81 and 0xFC, but it either comes out as the little n or a ? when printed out. I've also tried various ESC codes for the printer (Epson LQ-300+II) to allow ASCII codes after 128 to be characters, but nothing. It wouldn't be that bad, but the old program (written in VB 1.0) has no problem with the characters. I'll paste some code as an example of how I try to print them out.
import java.io.FileOutputStream;
import java.io.PrintStream;

public class PrintTester {

    public static void main(String[] args) {
        PrintTester pt = new PrintTester();
        pt.go();
    }

    private void go() {
        String string = umlaut + " = " + oUmlaut;
        startPrinter();
        ps.println(string);
        System.out.println(string);
        stopPrinter();
    }

    private void startPrinter() {
        try {
            // open a connection to the printer
            os = new FileOutputStream("LPT1: ");
            ps = new PrintStream(os);
            System.out.println("Connected to printer");
            // set up printer defaults
            ps.print(ESC);
            ps.print(AT);
            //asignCharTable();
            //selectCharTable();
            //EnhancedOn();
            //selectSuper();
            //select12CPI();
            //setLineSpacing();
            //setNLQ();
            //upperControlCodes();
            //allControlsAsChar();
            //Bell();
            System.out.println("Finished setting up printer");
        } catch (Exception ex) {
            ex.printStackTrace();
            System.out.print("Didn't connect to printer");
        }
    }

    private void stopPrinter() {
        try {
            ps.flush();
            ps.close();
            os.close();
            System.out.println("Printer Closed");
        } catch (Exception ex) {
            System.out.println("Couldn't close printer");
        }
    }

    private void select12CPI()       { ps.print(ESC); ps.print(M);  System.out.println("Set 12CPI"); }
    private void setLineSpacing()    { ps.print(ESC); ps.print(OH); System.out.println("Set LineSpacing"); }
    private void setNLQ()            { ps.print(ESC); ps.print(X);  ps.print(ONE); System.out.println("Set NLQ"); }
    private void selectSuper()       { ps.print(ESC); ps.print(S);  ps.print(ONE); }
    private void EnhancedOn()        { ps.print(ESC); ps.print(DontKnow); }
    private void upperControlCodes() { ps.print(ESC); ps.print(SIX); }
    private void allControlsAsChar() { ps.print(ESC); ps.print(I);  ps.print(Zero); }
    private void Bell()              { ps.print(SEVEN); }

    private void asignCharTable() {
        ps.print(ESC); ps.print(PARENTHESIS_LEFT); ps.print(t);
        ps.print(THREE); ps.print(Zero); ps.print(Key1);
        ps.print(Key1); ps.print(Zero);
    }

    private void selectCharTable() { ps.print(ESC); ps.print(t); ps.print(Key1); }

    // ESC command variables
    private static final char ESC = 27;   // escape
    private static final char AT = 64;    // @
    private static final char M = 77;     // M
    private static final char OH = 48;    // 0
    private static final char LF = 10;
    private static final char k = 107;    // k
    private static final char ONE = 49;   // 1
    private static final char SIX = 54;   // 6
    private static final char X = 120;    // x
    private static final char t = 116;    // t
    private static final char Zero = 0;
    private static final char Key1 = 1;
    private static final char THREE = 3;
    private static final char S = 83;     // S
    private static final char DontKnow = 71;
    private static final char SEVEN = 7;  // bell
    private static final char I = 73;     // I
    private static final char PARENTHESIS_LEFT = 40; // (
    private static final char umlaut = 0x81; // ü (originally declared "umlaud", which wouldn't compile against go())
    private static char oUmlaut = 'ü';

    PrintStream ps;
    FileOutputStream os;
}
Any ideas?
Paul Clapham
posted Feb 20, 2013 10:59:33
What charset is the printer expecting?
You'll have to look at the printer's documentation to see what code point it expects to receive for the ü character, for a start. Then you'll need to find out which Java charset represents the ü character by that code point. Finally, use a PrintWriter instead of a PrintStream so you can specify the charset. Right now your PrintStream is using your system's default charset, which apparently isn't the right choice.
Hugh Lloyd
posted Feb 21, 2013 05:39:34
Thank you! that solved it!
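For anyone hitting the same issue, Paul's suggestion can be sketched as below. It writes to an in-memory stream instead of "LPT1: ", and it assumes the printer's character table is IBM code page 850 (the CP437 family many Epson dot-matrix printers default to, in which ü is code point 0x81) — check the LQ-300+II manual for the table actually selected:

import java.io.ByteArrayOutputStream;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.nio.charset.Charset;

public class CharsetPrint {
    public static void main(String[] args) throws Exception {
        // Stand-in for new FileOutputStream("LPT1: ")
        ByteArrayOutputStream os = new ByteArrayOutputStream();
        // A PrintWriter over an OutputStreamWriter lets us name the charset
        // explicitly, instead of the platform default a bare PrintStream uses.
        PrintWriter pw = new PrintWriter(new OutputStreamWriter(os, Charset.forName("Cp850")));
        pw.print('\u00fc'); // ü
        pw.flush();
        System.out.println(os.toByteArray()[0] & 0xFF); // 129 = 0x81, the byte CP850 printers render as ü
    }
}

With the default charset (e.g. UTF-8 or windows-1252) the same character encodes to different bytes, which is why the printer drew ⁿ instead.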
>>>>> "Kuzminski," == Kuzminski, Stefan R <SKuzminski@...> writes:
Kuzminski> Some notes on compiling GD backend for windows. 1)
Kuzminski> _gdmodule.c needs to be modified in 2 places to
Kuzminski> compile on windows
Hi Stefan,
I've been following your instructions on building gdmodule and I've
gotten pretty far. I have a couple of questions for you.
Did you use the prebuilt gd (not gdmodule) or the dll supplied at the
web site. I built it myself, and when I try to build gdmodule, I get
errors like
_gdmodule.obj : error LNK2001: unresolved external symbol _gdFontGiantRep
_gdmodule.obj : error LNK2001: unresolved external symbol _gdFontLargeRep
The export to these symbols are dependent on the following
preprocessor options (from gd.h)
#ifdef BGDWIN32
#define BGD_EXPORT_DATA_IMPL __declspec(dllexport)
#else
#ifdef WIN32
#define BGD_EXPORT_DATA_IMPL __declspec(dllimport)
#else
/* 2.0.20: should be nothing at all */
#define BGD_EXPORT_DATA_IMPL
#endif /* WIN32 */
#endif /* BGDWIN32 */
In gdfontl.c, there is some code
BGD_EXPORT_DATA_IMPL gdFontPtr gdFontLarge = &gdFontLargeRep;
I set the BGDWIN32 option when building the gd DLL, but I still don't seem
to get _gdFontLargeRep exported to the gd.dll. For example, if I grep
the dll for gdFontLarge, I see that symbol but not gdFontLargeRep.
Ditto for the prebuilt bgd.dll.
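For what it's worth, the way that three-way macro resolves can be checked with a minimal stand-alone file (gdDemoValue is a made-up variable, not a gd symbol): with BGDWIN32 defined at DLL-build time it expands to __declspec(dllexport), with only WIN32 defined it expands to __declspec(dllimport) for consumers, and with neither defined it expands to nothing, so the file builds anywhere:

/* Same export-macro pattern as gd.h; gdDemoValue is hypothetical. */
#include <stdio.h>

#ifdef BGDWIN32
#define BGD_EXPORT_DATA_IMPL __declspec(dllexport) /* building the DLL */
#elif defined(WIN32)
#define BGD_EXPORT_DATA_IMPL __declspec(dllimport) /* consuming the DLL */
#else
#define BGD_EXPORT_DATA_IMPL                       /* non-Windows: nothing */
#endif

BGD_EXPORT_DATA_IMPL int gdDemoValue = 42;

int main(void) {
    printf("%d\n", gdDemoValue);
    return 0;
}

If gdFontLargeRep still isn't exported from your DLL, it suggests BGDWIN32 wasn't actually defined when gdfontl.c was compiled (e.g. set on the wrong project configuration).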
Did you encounter this problem and do you have any insights here?
Thanks,
John Hunter
Hi again,
I'm. | http://sourceforge.net/p/matplotlib/mailman/matplotlib-users/?viewmonth=200402&viewday=3 | CC-MAIN-2015-48 | refinedweb | 233 | 58.48 |